00:00:00.001 Started by upstream project "autotest-per-patch" build number 132428 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.072 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.072 The recommended git tool is: git 00:00:00.072 using credential 00000000-0000-0000-0000-000000000002 00:00:00.074 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.119 Fetching changes from the remote Git repository 00:00:00.123 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.179 Using shallow fetch with depth 1 00:00:00.179 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.179 > git --version # timeout=10 00:00:00.242 > git --version # 'git version 2.39.2' 00:00:00.242 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.285 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.285 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.461 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.474 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.486 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.486 > git config core.sparsecheckout # timeout=10 00:00:05.500 > git read-tree -mu HEAD # timeout=10 00:00:05.516 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.540 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.540 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.637 [Pipeline] Start of Pipeline 00:00:05.651 [Pipeline] library 00:00:05.652 Loading library shm_lib@master 00:00:05.653 Library shm_lib@master is cached. Copying from home. 00:00:05.668 [Pipeline] node 00:00:05.680 Running on VM-host-WFP1 in /var/jenkins/workspace/nvme-vg-autotest 00:00:05.682 [Pipeline] { 00:00:05.691 [Pipeline] catchError 00:00:05.693 [Pipeline] { 00:00:05.706 [Pipeline] wrap 00:00:05.715 [Pipeline] { 00:00:05.723 [Pipeline] stage 00:00:05.725 [Pipeline] { (Prologue) 00:00:05.745 [Pipeline] echo 00:00:05.747 Node: VM-host-WFP1 00:00:05.754 [Pipeline] cleanWs 00:00:05.765 [WS-CLEANUP] Deleting project workspace... 00:00:05.765 [WS-CLEANUP] Deferred wipeout is used... 
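[Editor's note: the pinned, shallow checkout traced above can be reproduced outside Jenkins with plain git. This is a minimal sketch, not part of the build: the URL, fetch options, and revision are copied verbatim from the log; the jbp/ directory name is an assumption, and checking out that SHA works here because it is the fetched branch tip (FETCH_HEAD).]

    git init jbp && cd jbp
    git fetch --tags --force --depth=1 \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507   # tip of the shallow fetch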
00:00:05.772 [WS-CLEANUP] done 00:00:05.980 [Pipeline] setCustomBuildProperty 00:00:06.070 [Pipeline] httpRequest 00:00:06.613 [Pipeline] echo 00:00:06.615 Sorcerer 10.211.164.20 is alive 00:00:06.624 [Pipeline] retry 00:00:06.625 [Pipeline] { 00:00:06.637 [Pipeline] httpRequest 00:00:06.642 HttpMethod: GET 00:00:06.642 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.643 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.644 Response Code: HTTP/1.1 200 OK 00:00:06.645 Success: Status code 200 is in the accepted range: 200,404 00:00:06.645 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.479 [Pipeline] } 00:00:07.490 [Pipeline] // retry 00:00:07.497 [Pipeline] sh 00:00:07.778 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:07.792 [Pipeline] httpRequest 00:00:08.273 [Pipeline] echo 00:00:08.274 Sorcerer 10.211.164.20 is alive 00:00:08.281 [Pipeline] retry 00:00:08.283 [Pipeline] { 00:00:08.297 [Pipeline] httpRequest 00:00:08.302 HttpMethod: GET 00:00:08.303 URL: http://10.211.164.20/packages/spdk_09ac735c8cf3291eeb6a7441697ca688a18dbe36.tar.gz 00:00:08.303 Sending request to url: http://10.211.164.20/packages/spdk_09ac735c8cf3291eeb6a7441697ca688a18dbe36.tar.gz 00:00:08.322 Response Code: HTTP/1.1 200 OK 00:00:08.323 Success: Status code 200 is in the accepted range: 200,404 00:00:08.324 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_09ac735c8cf3291eeb6a7441697ca688a18dbe36.tar.gz 00:00:28.620 [Pipeline] } 00:00:28.639 [Pipeline] // retry 00:00:28.648 [Pipeline] sh 00:00:28.934 + tar --no-same-owner -xf spdk_09ac735c8cf3291eeb6a7441697ca688a18dbe36.tar.gz 00:00:31.487 [Pipeline] sh 00:00:31.776 + git -C spdk log --oneline -n5 00:00:31.776 09ac735c8 bdev: Rename _bdev_memory_domain_io_get_buf() to bdev_io_get_bounce_buf() 00:00:31.776 c1691a126 bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext() 00:00:31.776 5c8d99223 bdev: Factor out checking bounce buffer necessity into helper function 00:00:31.776 d58114851 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io 00:00:31.776 32c3f377c bdev: Use data_block_size for upper layer buffer if hide_metadata is true 00:00:31.796 [Pipeline] writeFile 00:00:31.811 [Pipeline] sh 00:00:32.097 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:32.110 [Pipeline] sh 00:00:32.399 + cat autorun-spdk.conf 00:00:32.399 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.399 SPDK_TEST_NVME=1 00:00:32.399 SPDK_TEST_FTL=1 00:00:32.399 SPDK_TEST_ISAL=1 00:00:32.399 SPDK_RUN_ASAN=1 00:00:32.399 SPDK_RUN_UBSAN=1 00:00:32.399 SPDK_TEST_XNVME=1 00:00:32.399 SPDK_TEST_NVME_FDP=1 00:00:32.399 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:32.407 RUN_NIGHTLY=0 00:00:32.409 [Pipeline] } 00:00:32.424 [Pipeline] // stage 00:00:32.440 [Pipeline] stage 00:00:32.443 [Pipeline] { (Run VM) 00:00:32.458 [Pipeline] sh 00:00:32.747 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:32.748 + echo 'Start stage prepare_nvme.sh' 00:00:32.748 Start stage prepare_nvme.sh 00:00:32.748 + [[ -n 0 ]] 00:00:32.748 + disk_prefix=ex0 00:00:32.748 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:00:32.748 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:00:32.748 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:00:32.748 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:32.748 
++ SPDK_TEST_NVME=1 00:00:32.748 ++ SPDK_TEST_FTL=1 00:00:32.748 ++ SPDK_TEST_ISAL=1 00:00:32.748 ++ SPDK_RUN_ASAN=1 00:00:32.748 ++ SPDK_RUN_UBSAN=1 00:00:32.748 ++ SPDK_TEST_XNVME=1 00:00:32.748 ++ SPDK_TEST_NVME_FDP=1 00:00:32.748 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:32.748 ++ RUN_NIGHTLY=0 00:00:32.748 + cd /var/jenkins/workspace/nvme-vg-autotest 00:00:32.748 + nvme_files=() 00:00:32.748 + declare -A nvme_files 00:00:32.748 + backend_dir=/var/lib/libvirt/images/backends 00:00:32.748 + nvme_files['nvme.img']=5G 00:00:32.748 + nvme_files['nvme-cmb.img']=5G 00:00:32.748 + nvme_files['nvme-multi0.img']=4G 00:00:32.748 + nvme_files['nvme-multi1.img']=4G 00:00:32.748 + nvme_files['nvme-multi2.img']=4G 00:00:32.748 + nvme_files['nvme-openstack.img']=8G 00:00:32.748 + nvme_files['nvme-zns.img']=5G 00:00:32.748 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:32.748 + (( SPDK_TEST_FTL == 1 )) 00:00:32.748 + nvme_files["nvme-ftl.img"]=6G 00:00:32.748 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:32.748 + nvme_files["nvme-fdp.img"]=1G 00:00:32.748 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:32.748 + for nvme in "${!nvme_files[@]}" 00:00:32.748 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi2.img -s 4G 00:00:32.748 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:32.748 + for nvme in "${!nvme_files[@]}" 00:00:32.748 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-ftl.img -s 6G 00:00:32.748 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:00:32.748 + for nvme in "${!nvme_files[@]}" 00:00:32.748 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-cmb.img -s 5G 00:00:32.748 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:32.748 + for nvme in "${!nvme_files[@]}" 00:00:32.748 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-openstack.img -s 8G 00:00:33.007 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:33.007 + for nvme in "${!nvme_files[@]}" 00:00:33.007 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-zns.img -s 5G 00:00:33.007 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:33.007 + for nvme in "${!nvme_files[@]}" 00:00:33.007 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi1.img -s 4G 00:00:33.007 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:33.007 + for nvme in "${!nvme_files[@]}" 00:00:33.007 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-multi0.img -s 4G 00:00:33.007 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:33.007 + for nvme in "${!nvme_files[@]}" 00:00:33.007 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme-fdp.img -s 1G 00:00:33.007 Formatting '/var/lib/libvirt/images/backends/ex0-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:00:33.007 + for nvme in "${!nvme_files[@]}" 00:00:33.007 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex0-nvme.img -s 5G 00:00:33.007 Formatting '/var/lib/libvirt/images/backends/ex0-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:33.007 ++ sudo grep -rl ex0-nvme.img /etc/libvirt/qemu 00:00:33.267 + echo 'End stage prepare_nvme.sh' 00:00:33.267 End stage prepare_nvme.sh 00:00:33.280 [Pipeline] sh 00:00:33.566 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:33.566 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex0-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex0-nvme.img -b /var/lib/libvirt/images/backends/ex0-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex0-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:00:33.566 00:00:33.566 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:00:33.566 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:00:33.566 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:00:33.566 HELP=0 00:00:33.566 DRY_RUN=0 00:00:33.566 NVME_FILE=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,/var/lib/libvirt/images/backends/ex0-nvme.img,/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,/var/lib/libvirt/images/backends/ex0-nvme-fdp.img, 00:00:33.566 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:00:33.566 NVME_AUTO_CREATE=0 00:00:33.566 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex0-nvme-multi1.img:/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,, 00:00:33.566 NVME_CMB=,,,, 00:00:33.566 NVME_PMR=,,,, 00:00:33.566 NVME_ZNS=,,,, 00:00:33.566 NVME_MS=true,,,, 00:00:33.566 NVME_FDP=,,,on, 00:00:33.566 SPDK_VAGRANT_DISTRO=fedora39 00:00:33.566 SPDK_VAGRANT_VMCPU=10 00:00:33.566 SPDK_VAGRANT_VMRAM=12288 00:00:33.566 SPDK_VAGRANT_PROVIDER=libvirt 00:00:33.566 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:33.566 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:33.566 SPDK_OPENSTACK_NETWORK=0 00:00:33.566 VAGRANT_PACKAGE_BOX=0 00:00:33.566 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:33.566 FORCE_DISTRO=true 00:00:33.566 VAGRANT_BOX_VERSION= 00:00:33.566 EXTRA_VAGRANTFILES= 00:00:33.566 NIC_MODEL=e1000 00:00:33.566 00:00:33.566 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:00:33.566 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:00:36.108 Bringing machine 'default' up with 'libvirt' provider... 00:00:37.050 ==> default: Creating image (snapshot of base box volume). 00:00:37.311 ==> default: Creating domain with the following settings... 
00:00:37.311 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732124103_b43cbf2ae1e806ecf6c5 00:00:37.311 ==> default: -- Domain type: kvm 00:00:37.311 ==> default: -- Cpus: 10 00:00:37.311 ==> default: -- Feature: acpi 00:00:37.311 ==> default: -- Feature: apic 00:00:37.311 ==> default: -- Feature: pae 00:00:37.311 ==> default: -- Memory: 12288M 00:00:37.311 ==> default: -- Memory Backing: hugepages: 00:00:37.311 ==> default: -- Management MAC: 00:00:37.311 ==> default: -- Loader: 00:00:37.311 ==> default: -- Nvram: 00:00:37.311 ==> default: -- Base box: spdk/fedora39 00:00:37.311 ==> default: -- Storage pool: default 00:00:37.311 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732124103_b43cbf2ae1e806ecf6c5.img (20G) 00:00:37.311 ==> default: -- Volume Cache: default 00:00:37.311 ==> default: -- Kernel: 00:00:37.311 ==> default: -- Initrd: 00:00:37.311 ==> default: -- Graphics Type: vnc 00:00:37.311 ==> default: -- Graphics Port: -1 00:00:37.311 ==> default: -- Graphics IP: 127.0.0.1 00:00:37.311 ==> default: -- Graphics Password: Not defined 00:00:37.311 ==> default: -- Video Type: cirrus 00:00:37.311 ==> default: -- Video VRAM: 9216 00:00:37.311 ==> default: -- Sound Type: 00:00:37.311 ==> default: -- Keymap: en-us 00:00:37.311 ==> default: -- TPM Path: 00:00:37.311 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:37.311 ==> default: -- Command line args: 00:00:37.311 ==> default: -> value=-device, 00:00:37.311 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:37.311 ==> default: -> value=-drive, 00:00:37.311 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:00:37.311 ==> default: -> value=-device, 00:00:37.311 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:00:37.311 ==> default: -> value=-device, 00:00:37.311 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:37.311 ==> default: -> value=-drive, 00:00:37.311 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme.img,if=none,id=nvme-1-drive0, 00:00:37.311 ==> default: -> value=-device, 00:00:37.311 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:37.311 ==> default: -> value=-device, 00:00:37.311 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:00:37.311 ==> default: -> value=-drive, 00:00:37.311 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:00:37.311 ==> default: -> value=-device, 00:00:37.311 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:37.311 ==> default: -> value=-drive, 00:00:37.311 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:00:37.311 ==> default: -> value=-device, 00:00:37.311 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:37.311 ==> default: -> value=-drive, 00:00:37.311 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:00:37.311 ==> default: -> value=-device, 00:00:37.311 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:37.311 ==> default: -> value=-device, 00:00:37.311 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:00:37.311 ==> default: -> value=-device, 00:00:37.311 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:00:37.311 ==> default: -> value=-drive, 00:00:37.311 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:00:37.311 ==> default: -> value=-device, 00:00:37.311 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:37.882 ==> default: Creating shared folders metadata... 00:00:37.882 ==> default: Starting domain. 00:00:39.793 ==> default: Waiting for domain to get an IP address... 00:00:57.936 ==> default: Waiting for SSH to become available... 00:00:59.320 ==> default: Configuring and enabling network interfaces... 00:01:04.606 default: SSH address: 192.168.121.32:22 00:01:04.606 default: SSH username: vagrant 00:01:04.606 default: SSH auth method: private key 00:01:07.900 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:17.889 ==> default: Mounting SSHFS shared folder... 00:01:18.829 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:18.829 ==> default: Checking Mount.. 00:01:20.740 ==> default: Folder Successfully Mounted! 00:01:20.740 ==> default: Running provisioner: file... 00:01:21.682 default: ~/.gitconfig => .gitconfig 00:01:22.626 00:01:22.626 SUCCESS! 00:01:22.626 00:01:22.626 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:22.626 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:22.626 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:22.626 00:01:22.647 [Pipeline] } 00:01:22.666 [Pipeline] // stage 00:01:22.675 [Pipeline] dir 00:01:22.676 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:01:22.678 [Pipeline] { 00:01:22.695 [Pipeline] catchError 00:01:22.697 [Pipeline] { 00:01:22.711 [Pipeline] sh 00:01:23.022 + vagrant ssh-config --host vagrant 00:01:23.022 + sed -ne /^Host/,$p 00:01:23.022 + tee ssh_conf 00:01:25.564 Host vagrant 00:01:25.564 HostName 192.168.121.32 00:01:25.564 User vagrant 00:01:25.564 Port 22 00:01:25.564 UserKnownHostsFile /dev/null 00:01:25.564 StrictHostKeyChecking no 00:01:25.564 PasswordAuthentication no 00:01:25.564 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:25.564 IdentitiesOnly yes 00:01:25.564 LogLevel FATAL 00:01:25.564 ForwardAgent yes 00:01:25.564 ForwardX11 yes 00:01:25.564 00:01:25.579 [Pipeline] withEnv 00:01:25.581 [Pipeline] { 00:01:25.594 [Pipeline] sh 00:01:25.879 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:25.879 source /etc/os-release 00:01:25.879 [[ -e /image.version ]] && img=$(< /image.version) 00:01:25.879 # Minimal, systemd-like check. 
00:01:25.879 if [[ -e /.dockerenv ]]; then 00:01:25.879 # Clear garbage from the node's name: 00:01:25.879 # agt-er_autotest_547-896 -> autotest_547-896 00:01:25.879 # $HOSTNAME is the actual container id 00:01:25.879 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:25.879 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:25.879 # We can assume this is a mount from a host where container is running, 00:01:25.879 # so fetch its hostname to easily identify the target swarm worker. 00:01:25.879 container="$(< /etc/hostname) ($agent)" 00:01:25.879 else 00:01:25.879 # Fallback 00:01:25.879 container=$agent 00:01:25.879 fi 00:01:25.879 fi 00:01:25.880 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:25.880 00:01:26.151 [Pipeline] } 00:01:26.167 [Pipeline] // withEnv 00:01:26.175 [Pipeline] setCustomBuildProperty 00:01:26.190 [Pipeline] stage 00:01:26.192 [Pipeline] { (Tests) 00:01:26.210 [Pipeline] sh 00:01:26.493 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:26.768 [Pipeline] sh 00:01:27.056 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:27.330 [Pipeline] timeout 00:01:27.331 Timeout set to expire in 50 min 00:01:27.332 [Pipeline] { 00:01:27.346 [Pipeline] sh 00:01:27.629 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:28.198 HEAD is now at 09ac735c8 bdev: Rename _bdev_memory_domain_io_get_buf() to bdev_io_get_bounce_buf() 00:01:28.211 [Pipeline] sh 00:01:28.496 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:28.772 [Pipeline] sh 00:01:29.056 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:29.333 [Pipeline] sh 00:01:29.616 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:01:29.875 ++ readlink -f spdk_repo 00:01:29.875 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:29.875 + [[ -n /home/vagrant/spdk_repo ]] 00:01:29.875 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:29.875 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:29.875 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:29.875 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:29.875 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:29.875 + [[ nvme-vg-autotest == pkgdep-* ]] 00:01:29.875 + cd /home/vagrant/spdk_repo 00:01:29.875 + source /etc/os-release 00:01:29.875 ++ NAME='Fedora Linux' 00:01:29.875 ++ VERSION='39 (Cloud Edition)' 00:01:29.875 ++ ID=fedora 00:01:29.875 ++ VERSION_ID=39 00:01:29.875 ++ VERSION_CODENAME= 00:01:29.875 ++ PLATFORM_ID=platform:f39 00:01:29.875 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:29.875 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:29.875 ++ LOGO=fedora-logo-icon 00:01:29.875 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:29.875 ++ HOME_URL=https://fedoraproject.org/ 00:01:29.875 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:29.875 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:29.875 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:29.875 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:29.875 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:29.876 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:29.876 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:29.876 ++ SUPPORT_END=2024-11-12 00:01:29.876 ++ VARIANT='Cloud Edition' 00:01:29.876 ++ VARIANT_ID=cloud 00:01:29.876 + uname -a 00:01:29.876 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:29.876 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:30.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:30.702 Hugepages 00:01:30.702 node hugesize free / total 00:01:30.702 node0 1048576kB 0 / 0 00:01:30.702 node0 2048kB 0 / 0 00:01:30.702 00:01:30.702 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:30.702 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:30.702 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:30.702 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:01:30.702 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:01:30.961 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:01:30.961 + rm -f /tmp/spdk-ld-path 00:01:30.961 + source autorun-spdk.conf 00:01:30.961 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:30.961 ++ SPDK_TEST_NVME=1 00:01:30.961 ++ SPDK_TEST_FTL=1 00:01:30.961 ++ SPDK_TEST_ISAL=1 00:01:30.961 ++ SPDK_RUN_ASAN=1 00:01:30.961 ++ SPDK_RUN_UBSAN=1 00:01:30.961 ++ SPDK_TEST_XNVME=1 00:01:30.961 ++ SPDK_TEST_NVME_FDP=1 00:01:30.961 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:30.961 ++ RUN_NIGHTLY=0 00:01:30.961 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:30.961 + [[ -n '' ]] 00:01:30.961 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:30.961 + for M in /var/spdk/build-*-manifest.txt 00:01:30.961 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:30.961 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:30.961 + for M in /var/spdk/build-*-manifest.txt 00:01:30.961 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:30.961 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:30.961 + for M in /var/spdk/build-*-manifest.txt 00:01:30.961 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:30.961 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:30.961 ++ uname 00:01:30.961 + [[ Linux == \L\i\n\u\x ]] 00:01:30.961 + sudo dmesg -T 00:01:30.961 + sudo dmesg --clear 00:01:30.961 + dmesg_pid=5255 00:01:30.961 
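[Editor's note: the fourth controller reported by setup.sh status above (nvme3 at 0000:00:13.0) is the FDP-enabled one. Collected from the scattered "-> value=" arguments in the domain definition earlier in this log, its QEMU stanza amounts to the following sketch; the emulator path comes from SPDK_QEMU_EMULATOR, trailing list commas are dropped, and the other machine options are omitted, so this stanza alone is not a bootable VM.]

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex0-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096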
+ [[ Fedora Linux == FreeBSD ]] 00:01:30.961 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:30.961 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:30.961 + sudo dmesg -Tw 00:01:30.961 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:30.961 + [[ -x /usr/src/fio-static/fio ]] 00:01:30.961 + export FIO_BIN=/usr/src/fio-static/fio 00:01:30.961 + FIO_BIN=/usr/src/fio-static/fio 00:01:30.961 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:30.961 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:30.961 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:30.961 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:30.961 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:30.961 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:30.961 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:30.961 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:30.961 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:31.220 17:35:58 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:31.220 17:35:58 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:31.220 17:35:58 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.220 17:35:58 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:01:31.220 17:35:58 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:01:31.220 17:35:58 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:01:31.220 17:35:58 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:01:31.220 17:35:58 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:31.220 17:35:58 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:01:31.220 17:35:58 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:01:31.220 17:35:58 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.220 17:35:58 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:01:31.220 17:35:58 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:31.220 17:35:58 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:31.220 17:35:58 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:31.220 17:35:58 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:31.220 17:35:58 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:31.220 17:35:58 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:31.220 17:35:58 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:31.220 17:35:58 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:31.220 17:35:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.220 17:35:58 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.220 17:35:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.220 17:35:58 -- paths/export.sh@5 -- $ export PATH 00:01:31.220 17:35:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:31.220 17:35:58 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:31.220 17:35:58 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:31.220 17:35:58 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732124158.XXXXXX 00:01:31.220 17:35:58 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732124158.yRKA8G 00:01:31.220 17:35:58 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:31.220 17:35:58 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:31.220 17:35:58 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:31.220 17:35:58 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:31.220 17:35:58 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:31.220 17:35:58 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:31.220 17:35:58 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:31.220 17:35:58 -- common/autotest_common.sh@10 -- $ set +x 00:01:31.220 17:35:58 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:01:31.220 17:35:58 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:31.220 17:35:58 -- pm/common@17 -- $ local monitor 00:01:31.220 17:35:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.220 17:35:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:31.220 17:35:58 -- pm/common@25 -- $ sleep 1 00:01:31.220 17:35:58 -- pm/common@21 -- $ date +%s 00:01:31.220 17:35:58 -- pm/common@21 -- $ date +%s 00:01:31.220 17:35:58 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732124158 00:01:31.220 17:35:58 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732124158 00:01:31.220 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732124158_collect-cpu-load.pm.log 00:01:31.220 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732124158_collect-vmstat.pm.log 00:01:32.156 17:35:59 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:32.156 17:35:59 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:32.156 17:35:59 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:32.156 17:35:59 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:32.156 17:35:59 -- spdk/autobuild.sh@16 -- $ date -u 00:01:32.156 Wed Nov 20 05:35:59 PM UTC 2024 00:01:32.156 17:35:59 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:32.415 v25.01-pre-227-g09ac735c8 00:01:32.415 17:35:59 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:32.415 17:35:59 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:32.415 17:35:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:32.415 17:35:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:32.415 17:35:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.415 ************************************ 00:01:32.415 START TEST asan 00:01:32.415 ************************************ 00:01:32.415 using asan 00:01:32.415 17:35:59 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:01:32.415 00:01:32.415 real 0m0.001s 00:01:32.415 user 0m0.001s 00:01:32.415 sys 0m0.000s 00:01:32.415 17:35:59 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:32.415 17:35:59 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:32.415 ************************************ 00:01:32.415 END TEST asan 00:01:32.415 ************************************ 00:01:32.415 17:35:59 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:32.415 17:35:59 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:32.415 17:35:59 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:32.415 17:35:59 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:32.415 17:35:59 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.415 ************************************ 00:01:32.415 START TEST ubsan 00:01:32.415 ************************************ 00:01:32.415 using ubsan 00:01:32.415 17:35:59 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:32.415 00:01:32.415 real 0m0.000s 00:01:32.415 user 0m0.000s 00:01:32.415 sys 0m0.000s 00:01:32.415 17:35:59 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:32.415 17:35:59 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:32.415 ************************************ 00:01:32.415 END TEST ubsan 00:01:32.415 ************************************ 00:01:32.415 17:35:59 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:32.415 17:35:59 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:32.415 17:35:59 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:32.415 17:35:59 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:32.415 17:35:59 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:32.415 17:35:59 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:32.415 17:35:59 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:01:32.415 17:35:59 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:32.415 17:35:59 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:01:32.694 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:32.694 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:32.953 Using 'verbs' RDMA provider 00:01:49.270 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:07.439 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:07.439 Creating mk/config.mk...done. 00:02:07.439 Creating mk/cc.flags.mk...done. 00:02:07.439 Type 'make' to build. 00:02:07.439 17:36:32 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:07.439 17:36:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:07.439 17:36:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:07.439 17:36:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:07.439 ************************************ 00:02:07.439 START TEST make 00:02:07.439 ************************************ 00:02:07.439 17:36:32 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:07.439 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:07.439 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:07.439 meson setup builddir \ 00:02:07.439 -Dwith-libaio=enabled \ 00:02:07.439 -Dwith-liburing=enabled \ 00:02:07.439 -Dwith-libvfn=disabled \ 00:02:07.439 -Dwith-spdk=disabled \ 00:02:07.439 -Dexamples=false \ 00:02:07.439 -Dtests=false \ 00:02:07.439 -Dtools=false && \ 00:02:07.439 meson compile -C builddir && \ 00:02:07.439 cd -) 00:02:07.439 make[1]: Nothing to be done for 'all'. 
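[Editor's note: the configure invocation above (autobuild.sh@67) lines up with the autorun-spdk.conf values sourced earlier, assembled by get_config_params. A hedged illustration of that mapping, inferred from this log rather than quoted from SPDK's scripts:]

    config_params=""
    [[ $SPDK_RUN_ASAN   == 1 ]] && config_params+=" --enable-asan"
    [[ $SPDK_RUN_UBSAN  == 1 ]] && config_params+=" --enable-ubsan"
    [[ $SPDK_TEST_XNVME == 1 ]] && config_params+=" --with-xnvme"
    # remaining flags (--enable-debug, --enable-werror, --with-fio=..., etc.)
    # are emitted unconditionally or keyed off other conf variables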
00:02:08.007 The Meson build system 00:02:08.007 Version: 1.5.0 00:02:08.007 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:08.007 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:08.007 Build type: native build 00:02:08.007 Project name: xnvme 00:02:08.007 Project version: 0.7.5 00:02:08.007 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:08.007 C linker for the host machine: cc ld.bfd 2.40-14 00:02:08.007 Host machine cpu family: x86_64 00:02:08.007 Host machine cpu: x86_64 00:02:08.007 Message: host_machine.system: linux 00:02:08.007 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:08.007 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:08.007 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:08.007 Run-time dependency threads found: YES 00:02:08.007 Has header "setupapi.h" : NO 00:02:08.007 Has header "linux/blkzoned.h" : YES 00:02:08.007 Has header "linux/blkzoned.h" : YES (cached) 00:02:08.007 Has header "libaio.h" : YES 00:02:08.007 Library aio found: YES 00:02:08.007 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:08.007 Run-time dependency liburing found: YES 2.2 00:02:08.007 Dependency libvfn skipped: feature with-libvfn disabled 00:02:08.007 Found CMake: /usr/bin/cmake (3.27.7) 00:02:08.007 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:08.007 Subproject spdk : skipped: feature with-spdk disabled 00:02:08.007 Run-time dependency appleframeworks found: NO (tried framework) 00:02:08.007 Run-time dependency appleframeworks found: NO (tried framework) 00:02:08.007 Library rt found: YES 00:02:08.007 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:08.007 Configuring xnvme_config.h using configuration 00:02:08.007 Configuring xnvme.spec using configuration 00:02:08.007 Run-time dependency bash-completion found: YES 2.11 00:02:08.007 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:08.007 Program cp found: YES (/usr/bin/cp) 00:02:08.007 Build targets in project: 3 00:02:08.007 00:02:08.007 xnvme 0.7.5 00:02:08.007 00:02:08.007 Subprojects 00:02:08.007 spdk : NO Feature 'with-spdk' disabled 00:02:08.007 00:02:08.007 User defined options 00:02:08.007 examples : false 00:02:08.007 tests : false 00:02:08.007 tools : false 00:02:08.007 with-libaio : enabled 00:02:08.007 with-liburing: enabled 00:02:08.007 with-libvfn : disabled 00:02:08.007 with-spdk : disabled 00:02:08.007 00:02:08.007 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:08.576 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:08.576 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:08.576 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:08.576 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:08.576 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:08.576 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:08.576 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:08.576 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:08.576 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:08.576 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:08.576 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 
00:02:08.576 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:02:08.576 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:08.576 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:08.835 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:08.835 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:08.835 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:08.835 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:08.835 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:08.835 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:08.835 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:08.835 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:08.835 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:08.835 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:08.835 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:08.835 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:08.835 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:08.835 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:08.835 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:08.835 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:08.835 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:08.835 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:08.835 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:08.835 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:08.835 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:08.835 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:08.835 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:08.835 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:08.835 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:08.835 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:08.835 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:08.835 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:08.835 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:08.835 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:08.835 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:08.835 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:08.835 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:09.095 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:09.095 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:09.095 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:09.095 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:09.095 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:09.095 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:09.095 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:09.095 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:09.095 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:09.095 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:09.095 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:09.095 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:09.095 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:09.095 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:09.095 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:09.095 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:09.095 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:09.095 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:09.095 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:09.095 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:09.095 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:09.095 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:09.355 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:09.355 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:09.355 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:09.355 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:09.355 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:09.614 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:09.614 [75/76] Linking static target lib/libxnvme.a 00:02:09.614 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:09.614 INFO: autodetecting backend as ninja 00:02:09.615 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:09.874 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:17.994 The Meson build system 00:02:17.994 Version: 1.5.0 00:02:17.994 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:17.994 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:17.994 Build type: native build 00:02:17.994 Program cat found: YES (/usr/bin/cat) 00:02:17.994 Project name: DPDK 00:02:17.994 Project version: 24.03.0 00:02:17.994 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:17.994 C linker for the host machine: cc ld.bfd 2.40-14 00:02:17.994 Host machine cpu family: x86_64 00:02:17.994 Host machine cpu: x86_64 00:02:17.994 Message: ## Building in Developer Mode ## 00:02:17.994 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:17.994 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:17.994 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:17.994 Program python3 found: YES (/usr/bin/python3) 00:02:17.994 Program cat found: YES (/usr/bin/cat) 00:02:17.994 Compiler for C supports arguments -march=native: YES 00:02:17.994 Checking for size of "void *" : 8 00:02:17.994 Checking for size of "void *" : 8 (cached) 00:02:17.994 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:17.994 Library m found: YES 00:02:17.994 Library numa found: YES 00:02:17.994 Has header "numaif.h" : YES 00:02:17.994 Library fdt found: NO 00:02:17.994 Library execinfo found: NO 00:02:17.994 Has header "execinfo.h" : YES 00:02:17.994 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:17.994 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:17.994 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:17.994 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:17.994 Run-time dependency openssl found: YES 3.1.1 00:02:17.994 Run-time dependency libpcap found: YES 1.10.4 00:02:17.994 Has header "pcap.h" with dependency libpcap: YES 00:02:17.994 Compiler for C supports arguments -Wcast-qual: YES 00:02:17.994 Compiler for C supports arguments -Wdeprecated: YES 00:02:17.994 Compiler for C supports arguments -Wformat: YES 00:02:17.994 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:17.994 Compiler for C supports arguments -Wformat-security: NO 00:02:17.994 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:17.994 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:17.994 Compiler for C supports arguments -Wnested-externs: YES 00:02:17.994 Compiler for C supports arguments -Wold-style-definition: YES 00:02:17.994 Compiler for C supports arguments -Wpointer-arith: YES 00:02:17.994 Compiler for C supports arguments -Wsign-compare: YES 00:02:17.994 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:17.994 Compiler for C supports arguments -Wundef: YES 00:02:17.994 Compiler for C supports arguments -Wwrite-strings: YES 00:02:17.994 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:17.994 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:17.994 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:17.994 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:17.994 Program objdump found: YES (/usr/bin/objdump) 00:02:17.994 Compiler for C supports arguments -mavx512f: YES 00:02:17.994 Checking if "AVX512 checking" compiles: YES 00:02:17.994 Fetching value of define "__SSE4_2__" : 1 00:02:17.994 Fetching value of define "__AES__" : 1 00:02:17.994 Fetching value of define "__AVX__" : 1 00:02:17.994 Fetching value of define "__AVX2__" : 1 00:02:17.994 Fetching value of define "__AVX512BW__" : 1 00:02:17.994 Fetching value of define "__AVX512CD__" : 1 00:02:17.994 Fetching value of define "__AVX512DQ__" : 1 00:02:17.994 Fetching value of define "__AVX512F__" : 1 00:02:17.994 Fetching value of define "__AVX512VL__" : 1 00:02:17.994 Fetching value of define "__PCLMUL__" : 1 00:02:17.994 Fetching value of define "__RDRND__" : 1 00:02:17.994 Fetching value of define "__RDSEED__" : 1 00:02:17.994 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:17.994 Fetching value of define "__znver1__" : (undefined) 00:02:17.994 Fetching value of define "__znver2__" : (undefined) 00:02:17.994 Fetching value of define "__znver3__" : (undefined) 00:02:17.994 Fetching value of define "__znver4__" : (undefined) 00:02:17.994 Library asan found: YES 00:02:17.994 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:17.994 Message: lib/log: Defining dependency "log" 00:02:17.994 Message: lib/kvargs: Defining dependency "kvargs" 00:02:17.994 Message: lib/telemetry: Defining dependency "telemetry" 00:02:17.994 Library rt found: YES 00:02:17.994 Checking for function "getentropy" : NO 00:02:17.994 
Message: lib/eal: Defining dependency "eal" 00:02:17.994 Message: lib/ring: Defining dependency "ring" 00:02:17.994 Message: lib/rcu: Defining dependency "rcu" 00:02:17.994 Message: lib/mempool: Defining dependency "mempool" 00:02:17.994 Message: lib/mbuf: Defining dependency "mbuf" 00:02:17.994 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:17.994 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:17.994 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:17.994 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:17.994 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:17.994 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:17.994 Compiler for C supports arguments -mpclmul: YES 00:02:17.994 Compiler for C supports arguments -maes: YES 00:02:17.994 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:17.994 Compiler for C supports arguments -mavx512bw: YES 00:02:17.994 Compiler for C supports arguments -mavx512dq: YES 00:02:17.994 Compiler for C supports arguments -mavx512vl: YES 00:02:17.994 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:17.994 Compiler for C supports arguments -mavx2: YES 00:02:17.994 Compiler for C supports arguments -mavx: YES 00:02:17.994 Message: lib/net: Defining dependency "net" 00:02:17.994 Message: lib/meter: Defining dependency "meter" 00:02:17.994 Message: lib/ethdev: Defining dependency "ethdev" 00:02:17.994 Message: lib/pci: Defining dependency "pci" 00:02:17.994 Message: lib/cmdline: Defining dependency "cmdline" 00:02:17.994 Message: lib/hash: Defining dependency "hash" 00:02:17.994 Message: lib/timer: Defining dependency "timer" 00:02:17.994 Message: lib/compressdev: Defining dependency "compressdev" 00:02:17.994 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:17.994 Message: lib/dmadev: Defining dependency "dmadev" 00:02:17.994 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:17.994 Message: lib/power: Defining dependency "power" 00:02:17.994 Message: lib/reorder: Defining dependency "reorder" 00:02:17.994 Message: lib/security: Defining dependency "security" 00:02:17.994 Has header "linux/userfaultfd.h" : YES 00:02:17.994 Has header "linux/vduse.h" : YES 00:02:17.994 Message: lib/vhost: Defining dependency "vhost" 00:02:17.994 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:17.994 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:17.994 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:17.994 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:17.994 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:17.994 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:17.994 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:17.994 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:17.994 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:17.994 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:17.994 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:17.994 Configuring doxy-api-html.conf using configuration 00:02:17.994 Configuring doxy-api-man.conf using configuration 00:02:17.994 Program mandb found: YES (/usr/bin/mandb) 00:02:17.994 Program sphinx-build found: NO 00:02:17.994 Configuring rte_build_config.h using configuration 00:02:17.994 Message: 00:02:17.994 ================= 00:02:17.994 Applications 
Enabled 00:02:17.994 ================= 00:02:17.994 00:02:17.994 apps: 00:02:17.994 00:02:17.994 00:02:17.994 Message: 00:02:17.994 ================= 00:02:17.994 Libraries Enabled 00:02:17.994 ================= 00:02:17.994 00:02:17.994 libs: 00:02:17.994 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:17.994 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:17.994 cryptodev, dmadev, power, reorder, security, vhost, 00:02:17.994 00:02:17.994 Message: 00:02:17.994 =============== 00:02:17.994 Drivers Enabled 00:02:17.995 =============== 00:02:17.995 00:02:17.995 common: 00:02:17.995 00:02:17.995 bus: 00:02:17.995 pci, vdev, 00:02:17.995 mempool: 00:02:17.995 ring, 00:02:17.995 dma: 00:02:17.995 00:02:17.995 net: 00:02:17.995 00:02:17.995 crypto: 00:02:17.995 00:02:17.995 compress: 00:02:17.995 00:02:17.995 vdpa: 00:02:17.995 00:02:17.995 00:02:17.995 Message: 00:02:17.995 ================= 00:02:17.995 Content Skipped 00:02:17.995 ================= 00:02:17.995 00:02:17.995 apps: 00:02:17.995 dumpcap: explicitly disabled via build config 00:02:17.995 graph: explicitly disabled via build config 00:02:17.995 pdump: explicitly disabled via build config 00:02:17.995 proc-info: explicitly disabled via build config 00:02:17.995 test-acl: explicitly disabled via build config 00:02:17.995 test-bbdev: explicitly disabled via build config 00:02:17.995 test-cmdline: explicitly disabled via build config 00:02:17.995 test-compress-perf: explicitly disabled via build config 00:02:17.995 test-crypto-perf: explicitly disabled via build config 00:02:17.995 test-dma-perf: explicitly disabled via build config 00:02:17.995 test-eventdev: explicitly disabled via build config 00:02:17.995 test-fib: explicitly disabled via build config 00:02:17.995 test-flow-perf: explicitly disabled via build config 00:02:17.995 test-gpudev: explicitly disabled via build config 00:02:17.995 test-mldev: explicitly disabled via build config 00:02:17.995 test-pipeline: explicitly disabled via build config 00:02:17.995 test-pmd: explicitly disabled via build config 00:02:17.995 test-regex: explicitly disabled via build config 00:02:17.995 test-sad: explicitly disabled via build config 00:02:17.995 test-security-perf: explicitly disabled via build config 00:02:17.995 00:02:17.995 libs: 00:02:17.995 argparse: explicitly disabled via build config 00:02:17.995 metrics: explicitly disabled via build config 00:02:17.995 acl: explicitly disabled via build config 00:02:17.995 bbdev: explicitly disabled via build config 00:02:17.995 bitratestats: explicitly disabled via build config 00:02:17.995 bpf: explicitly disabled via build config 00:02:17.995 cfgfile: explicitly disabled via build config 00:02:17.995 distributor: explicitly disabled via build config 00:02:17.995 efd: explicitly disabled via build config 00:02:17.995 eventdev: explicitly disabled via build config 00:02:17.995 dispatcher: explicitly disabled via build config 00:02:17.995 gpudev: explicitly disabled via build config 00:02:17.995 gro: explicitly disabled via build config 00:02:17.995 gso: explicitly disabled via build config 00:02:17.995 ip_frag: explicitly disabled via build config 00:02:17.995 jobstats: explicitly disabled via build config 00:02:17.995 latencystats: explicitly disabled via build config 00:02:17.995 lpm: explicitly disabled via build config 00:02:17.995 member: explicitly disabled via build config 00:02:17.995 pcapng: explicitly disabled via build config 00:02:17.995 rawdev: explicitly disabled via build config 00:02:17.995 
regexdev: explicitly disabled via build config 00:02:17.995 mldev: explicitly disabled via build config 00:02:17.995 rib: explicitly disabled via build config 00:02:17.995 sched: explicitly disabled via build config 00:02:17.995 stack: explicitly disabled via build config 00:02:17.995 ipsec: explicitly disabled via build config 00:02:17.995 pdcp: explicitly disabled via build config 00:02:17.995 fib: explicitly disabled via build config 00:02:17.995 port: explicitly disabled via build config 00:02:17.995 pdump: explicitly disabled via build config 00:02:17.995 table: explicitly disabled via build config 00:02:17.995 pipeline: explicitly disabled via build config 00:02:17.995 graph: explicitly disabled via build config 00:02:17.995 node: explicitly disabled via build config 00:02:17.995 00:02:17.995 drivers: 00:02:17.995 common/cpt: not in enabled drivers build config 00:02:17.995 common/dpaax: not in enabled drivers build config 00:02:17.995 common/iavf: not in enabled drivers build config 00:02:17.995 common/idpf: not in enabled drivers build config 00:02:17.995 common/ionic: not in enabled drivers build config 00:02:17.995 common/mvep: not in enabled drivers build config 00:02:17.995 common/octeontx: not in enabled drivers build config 00:02:17.995 bus/auxiliary: not in enabled drivers build config 00:02:17.995 bus/cdx: not in enabled drivers build config 00:02:17.995 bus/dpaa: not in enabled drivers build config 00:02:17.995 bus/fslmc: not in enabled drivers build config 00:02:17.995 bus/ifpga: not in enabled drivers build config 00:02:17.995 bus/platform: not in enabled drivers build config 00:02:17.995 bus/uacce: not in enabled drivers build config 00:02:17.995 bus/vmbus: not in enabled drivers build config 00:02:17.995 common/cnxk: not in enabled drivers build config 00:02:17.995 common/mlx5: not in enabled drivers build config 00:02:17.995 common/nfp: not in enabled drivers build config 00:02:17.995 common/nitrox: not in enabled drivers build config 00:02:17.995 common/qat: not in enabled drivers build config 00:02:17.995 common/sfc_efx: not in enabled drivers build config 00:02:17.995 mempool/bucket: not in enabled drivers build config 00:02:17.995 mempool/cnxk: not in enabled drivers build config 00:02:17.995 mempool/dpaa: not in enabled drivers build config 00:02:17.995 mempool/dpaa2: not in enabled drivers build config 00:02:17.995 mempool/octeontx: not in enabled drivers build config 00:02:17.995 mempool/stack: not in enabled drivers build config 00:02:17.995 dma/cnxk: not in enabled drivers build config 00:02:17.995 dma/dpaa: not in enabled drivers build config 00:02:17.995 dma/dpaa2: not in enabled drivers build config 00:02:17.995 dma/hisilicon: not in enabled drivers build config 00:02:17.995 dma/idxd: not in enabled drivers build config 00:02:17.995 dma/ioat: not in enabled drivers build config 00:02:17.995 dma/skeleton: not in enabled drivers build config 00:02:17.995 net/af_packet: not in enabled drivers build config 00:02:17.995 net/af_xdp: not in enabled drivers build config 00:02:17.995 net/ark: not in enabled drivers build config 00:02:17.995 net/atlantic: not in enabled drivers build config 00:02:17.995 net/avp: not in enabled drivers build config 00:02:17.995 net/axgbe: not in enabled drivers build config 00:02:17.995 net/bnx2x: not in enabled drivers build config 00:02:17.995 net/bnxt: not in enabled drivers build config 00:02:17.995 net/bonding: not in enabled drivers build config 00:02:17.995 net/cnxk: not in enabled drivers build config 00:02:17.995 net/cpfl: 
not in enabled drivers build config 00:02:17.995 net/cxgbe: not in enabled drivers build config 00:02:17.995 net/dpaa: not in enabled drivers build config 00:02:17.995 net/dpaa2: not in enabled drivers build config 00:02:17.995 net/e1000: not in enabled drivers build config 00:02:17.995 net/ena: not in enabled drivers build config 00:02:17.995 net/enetc: not in enabled drivers build config 00:02:17.995 net/enetfec: not in enabled drivers build config 00:02:17.995 net/enic: not in enabled drivers build config 00:02:17.995 net/failsafe: not in enabled drivers build config 00:02:17.995 net/fm10k: not in enabled drivers build config 00:02:17.995 net/gve: not in enabled drivers build config 00:02:17.995 net/hinic: not in enabled drivers build config 00:02:17.995 net/hns3: not in enabled drivers build config 00:02:17.995 net/i40e: not in enabled drivers build config 00:02:17.995 net/iavf: not in enabled drivers build config 00:02:17.995 net/ice: not in enabled drivers build config 00:02:17.995 net/idpf: not in enabled drivers build config 00:02:17.995 net/igc: not in enabled drivers build config 00:02:17.995 net/ionic: not in enabled drivers build config 00:02:17.995 net/ipn3ke: not in enabled drivers build config 00:02:17.995 net/ixgbe: not in enabled drivers build config 00:02:17.995 net/mana: not in enabled drivers build config 00:02:17.995 net/memif: not in enabled drivers build config 00:02:17.995 net/mlx4: not in enabled drivers build config 00:02:17.995 net/mlx5: not in enabled drivers build config 00:02:17.995 net/mvneta: not in enabled drivers build config 00:02:17.995 net/mvpp2: not in enabled drivers build config 00:02:17.995 net/netvsc: not in enabled drivers build config 00:02:17.995 net/nfb: not in enabled drivers build config 00:02:17.995 net/nfp: not in enabled drivers build config 00:02:17.995 net/ngbe: not in enabled drivers build config 00:02:17.995 net/null: not in enabled drivers build config 00:02:17.995 net/octeontx: not in enabled drivers build config 00:02:17.995 net/octeon_ep: not in enabled drivers build config 00:02:17.995 net/pcap: not in enabled drivers build config 00:02:17.995 net/pfe: not in enabled drivers build config 00:02:17.995 net/qede: not in enabled drivers build config 00:02:17.995 net/ring: not in enabled drivers build config 00:02:17.995 net/sfc: not in enabled drivers build config 00:02:17.995 net/softnic: not in enabled drivers build config 00:02:17.995 net/tap: not in enabled drivers build config 00:02:17.995 net/thunderx: not in enabled drivers build config 00:02:17.995 net/txgbe: not in enabled drivers build config 00:02:17.995 net/vdev_netvsc: not in enabled drivers build config 00:02:17.995 net/vhost: not in enabled drivers build config 00:02:17.995 net/virtio: not in enabled drivers build config 00:02:17.995 net/vmxnet3: not in enabled drivers build config 00:02:17.995 raw/*: missing internal dependency, "rawdev" 00:02:17.995 crypto/armv8: not in enabled drivers build config 00:02:17.995 crypto/bcmfs: not in enabled drivers build config 00:02:17.995 crypto/caam_jr: not in enabled drivers build config 00:02:17.995 crypto/ccp: not in enabled drivers build config 00:02:17.995 crypto/cnxk: not in enabled drivers build config 00:02:17.995 crypto/dpaa_sec: not in enabled drivers build config 00:02:17.995 crypto/dpaa2_sec: not in enabled drivers build config 00:02:17.995 crypto/ipsec_mb: not in enabled drivers build config 00:02:17.995 crypto/mlx5: not in enabled drivers build config 00:02:17.995 crypto/mvsam: not in enabled drivers build config 
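Aside, not part of the captured log: the driver report follows the same pattern as the library report; only bus/pci, bus/vdev, and mempool/ring are enabled in this build, and every "not in enabled drivers build config" entry is excluded. A small sketch for checking the driver objects after the build, assuming the build-tmp directory this log's ninja invocation uses:

    # Sketch only: the enabled drivers show up as librte_bus_* and
    # librte_mempool_* objects in the build tree (path is an assumption
    # based on the ninja -C directory later in this log).
    ls /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/drivers/librte_*.so*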
00:02:17.995 crypto/nitrox: not in enabled drivers build config 00:02:17.995 crypto/null: not in enabled drivers build config 00:02:17.995 crypto/octeontx: not in enabled drivers build config 00:02:17.995 crypto/openssl: not in enabled drivers build config 00:02:17.995 crypto/scheduler: not in enabled drivers build config 00:02:17.995 crypto/uadk: not in enabled drivers build config 00:02:17.995 crypto/virtio: not in enabled drivers build config 00:02:17.995 compress/isal: not in enabled drivers build config 00:02:17.995 compress/mlx5: not in enabled drivers build config 00:02:17.995 compress/nitrox: not in enabled drivers build config 00:02:17.996 compress/octeontx: not in enabled drivers build config 00:02:17.996 compress/zlib: not in enabled drivers build config 00:02:17.996 regex/*: missing internal dependency, "regexdev" 00:02:17.996 ml/*: missing internal dependency, "mldev" 00:02:17.996 vdpa/ifc: not in enabled drivers build config 00:02:17.996 vdpa/mlx5: not in enabled drivers build config 00:02:17.996 vdpa/nfp: not in enabled drivers build config 00:02:17.996 vdpa/sfc: not in enabled drivers build config 00:02:17.996 event/*: missing internal dependency, "eventdev" 00:02:17.996 baseband/*: missing internal dependency, "bbdev" 00:02:17.996 gpu/*: missing internal dependency, "gpudev" 00:02:17.996 00:02:17.996 00:02:17.996 Build targets in project: 85 00:02:17.996 00:02:17.996 DPDK 24.03.0 00:02:17.996 00:02:17.996 User defined options 00:02:17.996 buildtype : debug 00:02:17.996 default_library : shared 00:02:17.996 libdir : lib 00:02:17.996 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:17.996 b_sanitize : address 00:02:17.996 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:17.996 c_link_args : 00:02:17.996 cpu_instruction_set: native 00:02:17.996 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:17.996 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:17.996 enable_docs : false 00:02:17.996 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:17.996 enable_kmods : false 00:02:17.996 max_lcores : 128 00:02:17.996 tests : false 00:02:17.996 00:02:17.996 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:17.996 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:17.996 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:17.996 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:17.996 [3/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:17.996 [4/268] Linking static target lib/librte_kvargs.a 00:02:17.996 [5/268] Linking static target lib/librte_log.a 00:02:17.996 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:17.996 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:17.996 [8/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:17.996 [9/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:17.996 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:17.996 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:17.996 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.996 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:17.996 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:17.996 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:17.996 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:17.996 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:17.996 [18/268] Linking static target lib/librte_telemetry.a 00:02:18.255 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:18.513 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:18.513 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:18.513 [22/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.513 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:18.513 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:18.513 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:18.513 [26/268] Linking target lib/librte_log.so.24.1 00:02:18.513 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:18.513 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:18.772 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:18.772 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:18.772 [31/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:18.772 [32/268] Linking target lib/librte_kvargs.so.24.1 00:02:18.772 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:19.031 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:19.031 [35/268] Linking target lib/librte_telemetry.so.24.1 00:02:19.031 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:19.031 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:19.031 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:19.031 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:19.031 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:19.031 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:19.335 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:19.335 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:19.335 [44/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:19.335 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:19.335 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:19.335 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 
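Aside, not part of the captured log: the "User defined options" block above is a complete record of how this DPDK tree was configured, so the invocation can be reconstructed. A hedged sketch of the equivalent meson command, with every option value copied verbatim from that block; only the source/build directory names are assumptions (the ninja step in this log uses build-tmp):

    meson setup build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --libdir=lib \
        --buildtype=debug \
        --default-library=shared \
        -Db_sanitize=address \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Dcpu_instruction_set=native \
        -Dmax_lcores=128 \
        -Dtests=false \
        -Denable_docs=false \
        -Denable_kmods=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
        -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table

With all apps and most libraries disabled, this is what reduces the job to the "Build targets in project: 85" reported above.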
00:02:19.335 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:19.618 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:19.618 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:19.618 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:19.619 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:19.619 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:19.877 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:19.877 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:19.877 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:19.877 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:19.877 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:20.136 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:20.136 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:20.136 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:20.136 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:20.136 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:20.136 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:20.394 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:20.394 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:20.394 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:20.394 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:20.652 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:20.652 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:20.652 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:20.652 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:20.652 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:20.911 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:20.911 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:20.911 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:20.911 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:20.911 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:20.911 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:20.911 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:21.170 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:21.170 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:21.170 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:21.170 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:21.170 [85/268] Linking static target lib/librte_eal.a 00:02:21.429 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:21.429 [87/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:21.429 [88/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:21.429 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:21.429 [90/268] Linking static target lib/librte_ring.a 00:02:21.429 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:21.429 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:21.429 [93/268] Linking static target lib/librte_rcu.a 00:02:21.687 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:21.687 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:21.687 [96/268] Linking static target lib/librte_mempool.a 00:02:21.687 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:21.944 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:21.944 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:21.944 [100/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.945 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:21.945 [102/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.945 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:22.201 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:22.201 [105/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:22.201 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:22.201 [107/268] Linking static target lib/librte_meter.a 00:02:22.459 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:22.459 [109/268] Linking static target lib/librte_net.a 00:02:22.459 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:22.459 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:22.459 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:22.718 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:22.718 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.718 [115/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:22.718 [116/268] Linking static target lib/librte_mbuf.a 00:02:22.718 [117/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.977 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.977 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:22.977 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:23.237 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:23.237 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:23.237 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:23.496 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:23.496 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:23.756 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:23.756 [127/268] Linking static target lib/librte_pci.a 00:02:23.756 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:23.756 [129/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.756 [130/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:23.756 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:23.756 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:23.756 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:24.014 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:24.014 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:24.014 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:24.014 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:24.014 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:24.014 [139/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.014 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:24.014 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:24.014 [142/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:24.014 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:24.332 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:24.332 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:24.332 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:24.332 [147/268] Linking static target lib/librte_cmdline.a 00:02:24.332 [148/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:24.594 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:24.594 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:24.594 [151/268] Linking static target lib/librte_timer.a 00:02:24.594 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.594 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:24.852 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.852 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:24.852 [156/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:25.110 [157/268] Linking static target lib/librte_ethdev.a 00:02:25.110 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:25.110 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:25.110 [160/268] Linking static target lib/librte_compressdev.a 00:02:25.110 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:25.368 [162/268] Linking static target lib/librte_hash.a 00:02:25.368 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:25.368 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.368 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:25.368 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:25.368 [167/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:25.368 [168/268] Linking static target lib/librte_dmadev.a 00:02:25.368 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:25.626 [170/268] Compiling C object 
lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:25.885 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:25.885 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:25.885 [173/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:25.885 [174/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.144 [175/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.144 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:26.402 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:26.402 [178/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:26.402 [179/268] Linking static target lib/librte_cryptodev.a 00:02:26.402 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:26.402 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.402 [182/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:26.402 [183/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.402 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:26.660 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:26.660 [186/268] Linking static target lib/librte_power.a 00:02:26.919 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:26.919 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:26.919 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:26.919 [190/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:26.919 [191/268] Linking static target lib/librte_reorder.a 00:02:27.178 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:27.178 [193/268] Linking static target lib/librte_security.a 00:02:27.437 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.437 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:27.695 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.954 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.954 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:27.954 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:27.954 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:28.213 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:28.213 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:28.472 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:28.472 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:28.472 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:28.730 [206/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:28.730 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:28.730 [208/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:28.730 [209/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:28.730 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:28.730 [211/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.730 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:28.730 [213/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.730 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:28.989 [215/268] Linking static target drivers/librte_bus_vdev.a 00:02:28.989 [216/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:28.989 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.989 [218/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:28.989 [219/268] Linking static target drivers/librte_bus_pci.a 00:02:29.248 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:29.248 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:29.248 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.248 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:29.248 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:29.248 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:29.248 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:29.506 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.074 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:34.268 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.268 [230/268] Linking target lib/librte_eal.so.24.1 00:02:34.268 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:34.268 [232/268] Linking target lib/librte_meter.so.24.1 00:02:34.268 [233/268] Linking target lib/librte_timer.so.24.1 00:02:34.268 [234/268] Linking target lib/librte_pci.so.24.1 00:02:34.268 [235/268] Linking target lib/librte_ring.so.24.1 00:02:34.268 [236/268] Linking target lib/librte_dmadev.so.24.1 00:02:34.268 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:34.268 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:34.268 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:34.268 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:34.268 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:34.268 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:34.268 [243/268] Linking target lib/librte_rcu.so.24.1 00:02:34.268 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:34.268 [245/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:34.268 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:34.268 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:34.268 [248/268] Linking target lib/librte_mbuf.so.24.1 00:02:34.268 
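Aside, not part of the captured log: the "Generating symbol file ..." and "... sym_chk with a custom command" steps interleaved with the linking above appear to dump and check each shared object's exported symbols as part of the build. A hand-run equivalent, assuming the build-tmp layout this log's ninja invocation uses (the library path is illustrative):

    # Sketch only: list the defined dynamic symbols a freshly linked DPDK
    # object exports, roughly what the generated .symbols files capture.
    nm -D --defined-only \
        /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/lib/librte_eal.so.24.1 \
        | awk '$2 == "T" {print $3}' | sort | head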
[249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:34.268 [250/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.268 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:34.268 [252/268] Linking target lib/librte_reorder.so.24.1 00:02:34.268 [253/268] Linking target lib/librte_net.so.24.1 00:02:34.268 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:34.268 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:34.528 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:34.528 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:34.528 [258/268] Linking target lib/librte_hash.so.24.1 00:02:34.528 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:34.528 [260/268] Linking target lib/librte_ethdev.so.24.1 00:02:34.528 [261/268] Linking target lib/librte_security.so.24.1 00:02:34.788 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:34.788 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:34.788 [264/268] Linking target lib/librte_power.so.24.1 00:02:35.382 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:35.382 [266/268] Linking static target lib/librte_vhost.a 00:02:37.917 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.917 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:37.917 INFO: autodetecting backend as ninja 00:02:37.917 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:50.145 CC lib/ut/ut.o 00:02:50.145 CC lib/log/log.o 00:02:50.145 CC lib/log/log_deprecated.o 00:02:50.145 CC lib/log/log_flags.o 00:02:50.145 CC lib/ut_mock/mock.o 00:02:50.405 LIB libspdk_ut_mock.a 00:02:50.405 LIB libspdk_log.a 00:02:50.405 LIB libspdk_ut.a 00:02:50.405 SO libspdk_ut_mock.so.6.0 00:02:50.405 SO libspdk_log.so.7.1 00:02:50.405 SO libspdk_ut.so.2.0 00:02:50.405 SYMLINK libspdk_ut_mock.so 00:02:50.405 SYMLINK libspdk_ut.so 00:02:50.664 SYMLINK libspdk_log.so 00:02:50.923 CXX lib/trace_parser/trace.o 00:02:50.923 CC lib/ioat/ioat.o 00:02:50.923 CC lib/dma/dma.o 00:02:50.923 CC lib/util/bit_array.o 00:02:50.923 CC lib/util/base64.o 00:02:50.923 CC lib/util/cpuset.o 00:02:50.923 CC lib/util/crc16.o 00:02:50.923 CC lib/util/crc32c.o 00:02:50.923 CC lib/util/crc32.o 00:02:50.923 CC lib/vfio_user/host/vfio_user_pci.o 00:02:50.923 CC lib/vfio_user/host/vfio_user.o 00:02:50.923 CC lib/util/crc32_ieee.o 00:02:50.923 CC lib/util/crc64.o 00:02:50.923 LIB libspdk_dma.a 00:02:50.923 CC lib/util/dif.o 00:02:50.923 SO libspdk_dma.so.5.0 00:02:51.182 CC lib/util/fd.o 00:02:51.182 CC lib/util/fd_group.o 00:02:51.182 CC lib/util/file.o 00:02:51.182 LIB libspdk_ioat.a 00:02:51.182 SYMLINK libspdk_dma.so 00:02:51.182 CC lib/util/hexlify.o 00:02:51.182 CC lib/util/iov.o 00:02:51.182 SO libspdk_ioat.so.7.0 00:02:51.182 CC lib/util/math.o 00:02:51.182 SYMLINK libspdk_ioat.so 00:02:51.182 CC lib/util/net.o 00:02:51.182 LIB libspdk_vfio_user.a 00:02:51.182 CC lib/util/pipe.o 00:02:51.182 SO libspdk_vfio_user.so.5.0 00:02:51.182 CC lib/util/strerror_tls.o 00:02:51.182 CC lib/util/string.o 00:02:51.182 SYMLINK libspdk_vfio_user.so 00:02:51.182 CC lib/util/uuid.o 00:02:51.182 CC lib/util/xor.o 00:02:51.441 CC lib/util/zipf.o 00:02:51.441 CC lib/util/md5.o 
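Aside, not part of the captured log: once ninja finishes the DPDK tree, the SPDK make output above emits LIB/SO/SYMLINK triples, i.e. each component is archived statically, linked as a versioned shared object (the "SO libspdk_log.so.7.1" style lines), and then given an unversioned symlink for the linker to find. A generic sketch of that pattern; the soname value embedded by SPDK is not shown in this log, so the one below is an assumption:

    # Illustrative only: link a versioned shared object and create the
    # development symlink, mirroring the SO/SYMLINK pairs above.
    cc -shared -o libspdk_log.so.7.1 -Wl,-soname,libspdk_log.so.7 log.o
    ln -sf libspdk_log.so.7.1 libspdk_log.so   # what -lspdk_log resolves to

The version suffix lets consumers built against one release keep loading it after newer builds appear alongside.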
00:02:51.700 LIB libspdk_util.a 00:02:52.004 SO libspdk_util.so.10.1 00:02:52.004 LIB libspdk_trace_parser.a 00:02:52.004 SO libspdk_trace_parser.so.6.0 00:02:52.004 SYMLINK libspdk_util.so 00:02:52.004 SYMLINK libspdk_trace_parser.so 00:02:52.262 CC lib/vmd/led.o 00:02:52.262 CC lib/vmd/vmd.o 00:02:52.262 CC lib/json/json_parse.o 00:02:52.262 CC lib/json/json_util.o 00:02:52.262 CC lib/json/json_write.o 00:02:52.262 CC lib/conf/conf.o 00:02:52.262 CC lib/rdma_utils/rdma_utils.o 00:02:52.262 CC lib/idxd/idxd.o 00:02:52.262 CC lib/idxd/idxd_user.o 00:02:52.262 CC lib/env_dpdk/env.o 00:02:52.262 CC lib/env_dpdk/memory.o 00:02:52.521 CC lib/env_dpdk/pci.o 00:02:52.521 LIB libspdk_conf.a 00:02:52.521 CC lib/env_dpdk/init.o 00:02:52.521 CC lib/idxd/idxd_kernel.o 00:02:52.521 SO libspdk_conf.so.6.0 00:02:52.521 LIB libspdk_rdma_utils.a 00:02:52.521 LIB libspdk_json.a 00:02:52.521 SO libspdk_rdma_utils.so.1.0 00:02:52.521 SYMLINK libspdk_conf.so 00:02:52.521 SO libspdk_json.so.6.0 00:02:52.521 CC lib/env_dpdk/threads.o 00:02:52.521 SYMLINK libspdk_rdma_utils.so 00:02:52.521 CC lib/env_dpdk/pci_ioat.o 00:02:52.521 SYMLINK libspdk_json.so 00:02:52.521 CC lib/env_dpdk/pci_virtio.o 00:02:52.780 CC lib/env_dpdk/pci_vmd.o 00:02:52.780 CC lib/env_dpdk/pci_idxd.o 00:02:52.780 CC lib/env_dpdk/pci_event.o 00:02:52.780 CC lib/rdma_provider/common.o 00:02:52.780 CC lib/env_dpdk/sigbus_handler.o 00:02:52.780 CC lib/env_dpdk/pci_dpdk.o 00:02:52.780 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:52.780 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:52.780 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:52.780 LIB libspdk_idxd.a 00:02:52.780 SO libspdk_idxd.so.12.1 00:02:52.780 LIB libspdk_vmd.a 00:02:53.038 SO libspdk_vmd.so.6.0 00:02:53.038 SYMLINK libspdk_idxd.so 00:02:53.038 SYMLINK libspdk_vmd.so 00:02:53.038 LIB libspdk_rdma_provider.a 00:02:53.038 SO libspdk_rdma_provider.so.7.0 00:02:53.038 SYMLINK libspdk_rdma_provider.so 00:02:53.296 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:53.296 CC lib/jsonrpc/jsonrpc_server.o 00:02:53.297 CC lib/jsonrpc/jsonrpc_client.o 00:02:53.297 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:53.555 LIB libspdk_jsonrpc.a 00:02:53.555 SO libspdk_jsonrpc.so.6.0 00:02:53.555 SYMLINK libspdk_jsonrpc.so 00:02:53.815 LIB libspdk_env_dpdk.a 00:02:54.074 SO libspdk_env_dpdk.so.15.1 00:02:54.074 CC lib/rpc/rpc.o 00:02:54.074 SYMLINK libspdk_env_dpdk.so 00:02:54.331 LIB libspdk_rpc.a 00:02:54.331 SO libspdk_rpc.so.6.0 00:02:54.332 SYMLINK libspdk_rpc.so 00:02:54.898 CC lib/notify/notify_rpc.o 00:02:54.898 CC lib/notify/notify.o 00:02:54.898 CC lib/keyring/keyring.o 00:02:54.898 CC lib/keyring/keyring_rpc.o 00:02:54.898 CC lib/trace/trace.o 00:02:54.898 CC lib/trace/trace_flags.o 00:02:54.898 CC lib/trace/trace_rpc.o 00:02:54.898 LIB libspdk_notify.a 00:02:55.156 SO libspdk_notify.so.6.0 00:02:55.156 LIB libspdk_keyring.a 00:02:55.156 SYMLINK libspdk_notify.so 00:02:55.156 SO libspdk_keyring.so.2.0 00:02:55.156 LIB libspdk_trace.a 00:02:55.156 SO libspdk_trace.so.11.0 00:02:55.156 SYMLINK libspdk_keyring.so 00:02:55.413 SYMLINK libspdk_trace.so 00:02:55.671 CC lib/thread/iobuf.o 00:02:55.671 CC lib/thread/thread.o 00:02:55.671 CC lib/sock/sock.o 00:02:55.671 CC lib/sock/sock_rpc.o 00:02:56.238 LIB libspdk_sock.a 00:02:56.238 SO libspdk_sock.so.10.0 00:02:56.238 SYMLINK libspdk_sock.so 00:02:56.820 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:56.820 CC lib/nvme/nvme_ctrlr.o 00:02:56.820 CC lib/nvme/nvme_fabric.o 00:02:56.820 CC lib/nvme/nvme_ns.o 00:02:56.820 CC lib/nvme/nvme_ns_cmd.o 00:02:56.820 CC 
lib/nvme/nvme_pcie_common.o 00:02:56.820 CC lib/nvme/nvme_pcie.o 00:02:56.820 CC lib/nvme/nvme_qpair.o 00:02:56.820 CC lib/nvme/nvme.o 00:02:57.388 LIB libspdk_thread.a 00:02:57.388 SO libspdk_thread.so.11.0 00:02:57.388 CC lib/nvme/nvme_quirks.o 00:02:57.388 CC lib/nvme/nvme_transport.o 00:02:57.388 SYMLINK libspdk_thread.so 00:02:57.388 CC lib/nvme/nvme_discovery.o 00:02:57.388 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:57.648 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:57.648 CC lib/nvme/nvme_tcp.o 00:02:57.648 CC lib/nvme/nvme_opal.o 00:02:57.648 CC lib/nvme/nvme_io_msg.o 00:02:57.908 CC lib/nvme/nvme_poll_group.o 00:02:57.908 CC lib/nvme/nvme_zns.o 00:02:57.908 CC lib/nvme/nvme_stubs.o 00:02:58.166 CC lib/nvme/nvme_auth.o 00:02:58.166 CC lib/accel/accel.o 00:02:58.166 CC lib/accel/accel_rpc.o 00:02:58.424 CC lib/blob/blobstore.o 00:02:58.425 CC lib/blob/request.o 00:02:58.425 CC lib/init/json_config.o 00:02:58.425 CC lib/init/subsystem.o 00:02:58.425 CC lib/init/subsystem_rpc.o 00:02:58.425 CC lib/blob/zeroes.o 00:02:58.682 CC lib/blob/blob_bs_dev.o 00:02:58.682 CC lib/nvme/nvme_cuse.o 00:02:58.682 CC lib/nvme/nvme_rdma.o 00:02:58.682 CC lib/init/rpc.o 00:02:58.942 LIB libspdk_init.a 00:02:58.942 SO libspdk_init.so.6.0 00:02:58.942 CC lib/accel/accel_sw.o 00:02:58.942 CC lib/virtio/virtio.o 00:02:58.942 SYMLINK libspdk_init.so 00:02:58.942 CC lib/virtio/virtio_vhost_user.o 00:02:59.200 CC lib/fsdev/fsdev.o 00:02:59.200 CC lib/fsdev/fsdev_io.o 00:02:59.200 CC lib/virtio/virtio_vfio_user.o 00:02:59.200 CC lib/event/app.o 00:02:59.458 LIB libspdk_accel.a 00:02:59.458 CC lib/event/reactor.o 00:02:59.458 SO libspdk_accel.so.16.0 00:02:59.458 CC lib/event/log_rpc.o 00:02:59.458 SYMLINK libspdk_accel.so 00:02:59.458 CC lib/virtio/virtio_pci.o 00:02:59.458 CC lib/event/app_rpc.o 00:02:59.717 CC lib/event/scheduler_static.o 00:02:59.717 CC lib/fsdev/fsdev_rpc.o 00:02:59.717 CC lib/bdev/bdev_rpc.o 00:02:59.717 CC lib/bdev/bdev.o 00:02:59.717 CC lib/bdev/bdev_zone.o 00:02:59.717 CC lib/bdev/part.o 00:02:59.976 LIB libspdk_virtio.a 00:02:59.976 CC lib/bdev/scsi_nvme.o 00:02:59.976 LIB libspdk_event.a 00:02:59.976 SO libspdk_virtio.so.7.0 00:02:59.976 LIB libspdk_fsdev.a 00:02:59.976 SO libspdk_event.so.14.0 00:02:59.976 SO libspdk_fsdev.so.2.0 00:02:59.976 SYMLINK libspdk_virtio.so 00:02:59.976 SYMLINK libspdk_fsdev.so 00:02:59.976 SYMLINK libspdk_event.so 00:03:00.235 LIB libspdk_nvme.a 00:03:00.235 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:00.494 SO libspdk_nvme.so.15.0 00:03:00.753 SYMLINK libspdk_nvme.so 00:03:01.011 LIB libspdk_fuse_dispatcher.a 00:03:01.011 SO libspdk_fuse_dispatcher.so.1.0 00:03:01.315 SYMLINK libspdk_fuse_dispatcher.so 00:03:02.293 LIB libspdk_blob.a 00:03:02.293 SO libspdk_blob.so.11.0 00:03:02.293 SYMLINK libspdk_blob.so 00:03:02.587 CC lib/blobfs/tree.o 00:03:02.587 CC lib/blobfs/blobfs.o 00:03:02.587 CC lib/lvol/lvol.o 00:03:02.846 LIB libspdk_bdev.a 00:03:03.106 SO libspdk_bdev.so.17.0 00:03:03.106 SYMLINK libspdk_bdev.so 00:03:03.365 CC lib/nvmf/ctrlr.o 00:03:03.365 CC lib/nvmf/ctrlr_discovery.o 00:03:03.365 CC lib/nvmf/ctrlr_bdev.o 00:03:03.365 CC lib/scsi/dev.o 00:03:03.365 CC lib/nvmf/subsystem.o 00:03:03.365 CC lib/ublk/ublk.o 00:03:03.365 CC lib/ftl/ftl_core.o 00:03:03.365 CC lib/nbd/nbd.o 00:03:03.624 LIB libspdk_blobfs.a 00:03:03.624 CC lib/scsi/lun.o 00:03:03.624 SO libspdk_blobfs.so.10.0 00:03:03.624 LIB libspdk_lvol.a 00:03:03.624 SYMLINK libspdk_blobfs.so 00:03:03.624 CC lib/scsi/port.o 00:03:03.885 SO libspdk_lvol.so.10.0 00:03:03.885 CC lib/ftl/ftl_init.o 
00:03:03.885 SYMLINK libspdk_lvol.so 00:03:03.885 CC lib/scsi/scsi.o 00:03:03.885 CC lib/nbd/nbd_rpc.o 00:03:03.885 CC lib/nvmf/nvmf.o 00:03:03.885 CC lib/nvmf/nvmf_rpc.o 00:03:03.885 CC lib/nvmf/transport.o 00:03:03.885 CC lib/scsi/scsi_bdev.o 00:03:04.145 CC lib/ftl/ftl_layout.o 00:03:04.145 LIB libspdk_nbd.a 00:03:04.145 SO libspdk_nbd.so.7.0 00:03:04.145 CC lib/ublk/ublk_rpc.o 00:03:04.145 SYMLINK libspdk_nbd.so 00:03:04.145 CC lib/ftl/ftl_debug.o 00:03:04.145 CC lib/nvmf/tcp.o 00:03:04.404 LIB libspdk_ublk.a 00:03:04.404 SO libspdk_ublk.so.3.0 00:03:04.404 CC lib/ftl/ftl_io.o 00:03:04.404 CC lib/ftl/ftl_sb.o 00:03:04.404 SYMLINK libspdk_ublk.so 00:03:04.404 CC lib/ftl/ftl_l2p.o 00:03:04.663 CC lib/scsi/scsi_pr.o 00:03:04.663 CC lib/scsi/scsi_rpc.o 00:03:04.663 CC lib/ftl/ftl_l2p_flat.o 00:03:04.663 CC lib/scsi/task.o 00:03:04.663 CC lib/ftl/ftl_nv_cache.o 00:03:04.663 CC lib/nvmf/stubs.o 00:03:04.663 CC lib/nvmf/mdns_server.o 00:03:04.921 CC lib/nvmf/rdma.o 00:03:04.921 CC lib/nvmf/auth.o 00:03:04.921 CC lib/ftl/ftl_band.o 00:03:04.921 LIB libspdk_scsi.a 00:03:04.921 CC lib/ftl/ftl_band_ops.o 00:03:04.921 SO libspdk_scsi.so.9.0 00:03:05.182 SYMLINK libspdk_scsi.so 00:03:05.182 CC lib/ftl/ftl_writer.o 00:03:05.182 CC lib/ftl/ftl_rq.o 00:03:05.182 CC lib/ftl/ftl_reloc.o 00:03:05.182 CC lib/ftl/ftl_l2p_cache.o 00:03:05.442 CC lib/ftl/ftl_p2l.o 00:03:05.442 CC lib/ftl/ftl_p2l_log.o 00:03:05.442 CC lib/ftl/mngt/ftl_mngt.o 00:03:05.442 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:05.702 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:05.702 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:05.702 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:05.702 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:05.702 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:05.702 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:05.702 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:05.702 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:05.960 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:05.960 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:05.960 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:05.960 CC lib/ftl/utils/ftl_conf.o 00:03:05.960 CC lib/ftl/utils/ftl_md.o 00:03:05.960 CC lib/iscsi/conn.o 00:03:06.219 CC lib/iscsi/init_grp.o 00:03:06.219 CC lib/ftl/utils/ftl_mempool.o 00:03:06.219 CC lib/ftl/utils/ftl_bitmap.o 00:03:06.219 CC lib/ftl/utils/ftl_property.o 00:03:06.219 CC lib/vhost/vhost.o 00:03:06.219 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:06.219 CC lib/iscsi/iscsi.o 00:03:06.219 CC lib/iscsi/param.o 00:03:06.504 CC lib/vhost/vhost_rpc.o 00:03:06.504 CC lib/vhost/vhost_scsi.o 00:03:06.504 CC lib/vhost/vhost_blk.o 00:03:06.504 CC lib/iscsi/portal_grp.o 00:03:06.504 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:06.763 CC lib/iscsi/tgt_node.o 00:03:06.763 CC lib/iscsi/iscsi_subsystem.o 00:03:06.763 CC lib/iscsi/iscsi_rpc.o 00:03:06.763 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:07.022 CC lib/vhost/rte_vhost_user.o 00:03:07.022 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:07.022 CC lib/iscsi/task.o 00:03:07.282 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:07.282 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:07.282 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:07.282 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:07.282 LIB libspdk_nvmf.a 00:03:07.282 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:07.282 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:07.282 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:07.282 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:07.282 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:07.540 CC lib/ftl/base/ftl_base_dev.o 00:03:07.540 SO libspdk_nvmf.so.20.0 00:03:07.541 CC lib/ftl/base/ftl_base_bdev.o 00:03:07.541 CC 
lib/ftl/ftl_trace.o 00:03:07.799 SYMLINK libspdk_nvmf.so 00:03:07.799 LIB libspdk_ftl.a 00:03:07.799 LIB libspdk_iscsi.a 00:03:08.059 SO libspdk_iscsi.so.8.0 00:03:08.059 LIB libspdk_vhost.a 00:03:08.059 SO libspdk_ftl.so.9.0 00:03:08.059 SO libspdk_vhost.so.8.0 00:03:08.318 SYMLINK libspdk_iscsi.so 00:03:08.318 SYMLINK libspdk_vhost.so 00:03:08.318 SYMLINK libspdk_ftl.so 00:03:08.888 CC module/env_dpdk/env_dpdk_rpc.o 00:03:08.888 CC module/blob/bdev/blob_bdev.o 00:03:08.888 CC module/keyring/file/keyring.o 00:03:08.888 CC module/keyring/linux/keyring.o 00:03:08.888 CC module/accel/error/accel_error.o 00:03:08.888 CC module/scheduler/gscheduler/gscheduler.o 00:03:08.888 CC module/sock/posix/posix.o 00:03:08.888 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:08.888 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:08.888 CC module/fsdev/aio/fsdev_aio.o 00:03:08.888 LIB libspdk_env_dpdk_rpc.a 00:03:08.888 SO libspdk_env_dpdk_rpc.so.6.0 00:03:09.147 SYMLINK libspdk_env_dpdk_rpc.so 00:03:09.147 CC module/accel/error/accel_error_rpc.o 00:03:09.147 CC module/keyring/linux/keyring_rpc.o 00:03:09.147 CC module/keyring/file/keyring_rpc.o 00:03:09.147 LIB libspdk_scheduler_dpdk_governor.a 00:03:09.147 LIB libspdk_scheduler_gscheduler.a 00:03:09.147 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:09.147 SO libspdk_scheduler_gscheduler.so.4.0 00:03:09.147 LIB libspdk_scheduler_dynamic.a 00:03:09.147 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:09.147 SO libspdk_scheduler_dynamic.so.4.0 00:03:09.147 SYMLINK libspdk_scheduler_gscheduler.so 00:03:09.147 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:09.147 LIB libspdk_keyring_linux.a 00:03:09.147 LIB libspdk_accel_error.a 00:03:09.147 LIB libspdk_blob_bdev.a 00:03:09.147 LIB libspdk_keyring_file.a 00:03:09.147 SYMLINK libspdk_scheduler_dynamic.so 00:03:09.147 SO libspdk_keyring_linux.so.1.0 00:03:09.147 SO libspdk_blob_bdev.so.11.0 00:03:09.147 SO libspdk_accel_error.so.2.0 00:03:09.407 SO libspdk_keyring_file.so.2.0 00:03:09.407 SYMLINK libspdk_keyring_linux.so 00:03:09.407 SYMLINK libspdk_blob_bdev.so 00:03:09.407 CC module/fsdev/aio/linux_aio_mgr.o 00:03:09.407 SYMLINK libspdk_accel_error.so 00:03:09.407 SYMLINK libspdk_keyring_file.so 00:03:09.407 CC module/accel/ioat/accel_ioat.o 00:03:09.407 CC module/accel/ioat/accel_ioat_rpc.o 00:03:09.407 CC module/accel/dsa/accel_dsa.o 00:03:09.407 CC module/accel/dsa/accel_dsa_rpc.o 00:03:09.407 CC module/accel/iaa/accel_iaa.o 00:03:09.666 CC module/accel/iaa/accel_iaa_rpc.o 00:03:09.666 LIB libspdk_accel_ioat.a 00:03:09.666 CC module/bdev/delay/vbdev_delay.o 00:03:09.666 CC module/blobfs/bdev/blobfs_bdev.o 00:03:09.666 SO libspdk_accel_ioat.so.6.0 00:03:09.666 LIB libspdk_accel_iaa.a 00:03:09.666 SYMLINK libspdk_accel_ioat.so 00:03:09.666 LIB libspdk_fsdev_aio.a 00:03:09.666 LIB libspdk_accel_dsa.a 00:03:09.666 SO libspdk_accel_iaa.so.3.0 00:03:09.666 CC module/bdev/error/vbdev_error.o 00:03:09.666 SO libspdk_fsdev_aio.so.1.0 00:03:09.666 LIB libspdk_sock_posix.a 00:03:09.666 SO libspdk_accel_dsa.so.5.0 00:03:09.925 CC module/bdev/gpt/gpt.o 00:03:09.925 SYMLINK libspdk_accel_iaa.so 00:03:09.925 SO libspdk_sock_posix.so.6.0 00:03:09.925 CC module/bdev/error/vbdev_error_rpc.o 00:03:09.925 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:09.925 SYMLINK libspdk_accel_dsa.so 00:03:09.925 CC module/bdev/gpt/vbdev_gpt.o 00:03:09.925 SYMLINK libspdk_fsdev_aio.so 00:03:09.925 CC module/bdev/lvol/vbdev_lvol.o 00:03:09.925 SYMLINK libspdk_sock_posix.so 00:03:09.925 CC module/bdev/malloc/bdev_malloc.o 
00:03:09.925 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:09.925 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:09.925 LIB libspdk_blobfs_bdev.a 00:03:10.184 LIB libspdk_bdev_error.a 00:03:10.184 CC module/bdev/null/bdev_null.o 00:03:10.184 SO libspdk_blobfs_bdev.so.6.0 00:03:10.184 SO libspdk_bdev_error.so.6.0 00:03:10.184 CC module/bdev/nvme/bdev_nvme.o 00:03:10.184 SYMLINK libspdk_blobfs_bdev.so 00:03:10.184 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:10.184 SYMLINK libspdk_bdev_error.so 00:03:10.184 LIB libspdk_bdev_delay.a 00:03:10.184 LIB libspdk_bdev_gpt.a 00:03:10.184 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:10.184 CC module/bdev/passthru/vbdev_passthru.o 00:03:10.184 SO libspdk_bdev_delay.so.6.0 00:03:10.184 SO libspdk_bdev_gpt.so.6.0 00:03:10.184 SYMLINK libspdk_bdev_delay.so 00:03:10.442 SYMLINK libspdk_bdev_gpt.so 00:03:10.442 LIB libspdk_bdev_malloc.a 00:03:10.442 CC module/bdev/raid/bdev_raid.o 00:03:10.442 SO libspdk_bdev_malloc.so.6.0 00:03:10.442 CC module/bdev/null/bdev_null_rpc.o 00:03:10.442 SYMLINK libspdk_bdev_malloc.so 00:03:10.442 CC module/bdev/split/vbdev_split.o 00:03:10.442 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:10.442 CC module/bdev/split/vbdev_split_rpc.o 00:03:10.442 LIB libspdk_bdev_null.a 00:03:10.702 SO libspdk_bdev_null.so.6.0 00:03:10.702 LIB libspdk_bdev_lvol.a 00:03:10.702 CC module/bdev/xnvme/bdev_xnvme.o 00:03:10.702 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:10.702 SO libspdk_bdev_lvol.so.6.0 00:03:10.702 SYMLINK libspdk_bdev_null.so 00:03:10.702 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:10.702 CC module/bdev/nvme/nvme_rpc.o 00:03:10.702 LIB libspdk_bdev_split.a 00:03:10.702 SYMLINK libspdk_bdev_lvol.so 00:03:10.702 CC module/bdev/nvme/bdev_mdns_client.o 00:03:10.702 SO libspdk_bdev_split.so.6.0 00:03:10.702 LIB libspdk_bdev_passthru.a 00:03:10.702 SYMLINK libspdk_bdev_split.so 00:03:10.962 CC module/bdev/nvme/vbdev_opal.o 00:03:10.962 SO libspdk_bdev_passthru.so.6.0 00:03:10.962 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:10.962 LIB libspdk_bdev_zone_block.a 00:03:10.962 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:10.962 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:10.962 SO libspdk_bdev_zone_block.so.6.0 00:03:10.962 SYMLINK libspdk_bdev_passthru.so 00:03:10.962 CC module/bdev/raid/bdev_raid_rpc.o 00:03:10.962 SYMLINK libspdk_bdev_zone_block.so 00:03:10.962 LIB libspdk_bdev_xnvme.a 00:03:10.962 CC module/bdev/aio/bdev_aio.o 00:03:10.962 SO libspdk_bdev_xnvme.so.3.0 00:03:11.222 CC module/bdev/aio/bdev_aio_rpc.o 00:03:11.222 SYMLINK libspdk_bdev_xnvme.so 00:03:11.222 CC module/bdev/ftl/bdev_ftl.o 00:03:11.222 CC module/bdev/raid/bdev_raid_sb.o 00:03:11.222 CC module/bdev/raid/raid0.o 00:03:11.222 CC module/bdev/iscsi/bdev_iscsi.o 00:03:11.222 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:11.222 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:11.222 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:11.481 CC module/bdev/raid/raid1.o 00:03:11.481 LIB libspdk_bdev_aio.a 00:03:11.481 CC module/bdev/raid/concat.o 00:03:11.481 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:11.481 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:11.481 SO libspdk_bdev_aio.so.6.0 00:03:11.481 LIB libspdk_bdev_ftl.a 00:03:11.481 SO libspdk_bdev_ftl.so.6.0 00:03:11.481 SYMLINK libspdk_bdev_aio.so 00:03:11.481 LIB libspdk_bdev_iscsi.a 00:03:11.481 SYMLINK libspdk_bdev_ftl.so 00:03:11.740 SO libspdk_bdev_iscsi.so.6.0 00:03:11.740 SYMLINK libspdk_bdev_iscsi.so 00:03:11.740 LIB libspdk_bdev_raid.a 00:03:11.740 SO libspdk_bdev_raid.so.6.0 00:03:11.999 LIB 
libspdk_bdev_virtio.a 00:03:11.999 SYMLINK libspdk_bdev_raid.so 00:03:11.999 SO libspdk_bdev_virtio.so.6.0 00:03:11.999 SYMLINK libspdk_bdev_virtio.so 00:03:14.610 LIB libspdk_bdev_nvme.a 00:03:14.610 SO libspdk_bdev_nvme.so.7.1 00:03:14.610 SYMLINK libspdk_bdev_nvme.so 00:03:15.178 CC module/event/subsystems/sock/sock.o 00:03:15.178 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:15.178 CC module/event/subsystems/scheduler/scheduler.o 00:03:15.178 CC module/event/subsystems/vmd/vmd.o 00:03:15.178 CC module/event/subsystems/iobuf/iobuf.o 00:03:15.178 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:15.178 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:15.178 CC module/event/subsystems/keyring/keyring.o 00:03:15.178 CC module/event/subsystems/fsdev/fsdev.o 00:03:15.178 LIB libspdk_event_vhost_blk.a 00:03:15.178 LIB libspdk_event_sock.a 00:03:15.178 LIB libspdk_event_scheduler.a 00:03:15.178 LIB libspdk_event_keyring.a 00:03:15.178 LIB libspdk_event_vmd.a 00:03:15.178 LIB libspdk_event_iobuf.a 00:03:15.178 LIB libspdk_event_fsdev.a 00:03:15.178 SO libspdk_event_vhost_blk.so.3.0 00:03:15.178 SO libspdk_event_sock.so.5.0 00:03:15.178 SO libspdk_event_scheduler.so.4.0 00:03:15.178 SO libspdk_event_keyring.so.1.0 00:03:15.178 SO libspdk_event_vmd.so.6.0 00:03:15.178 SO libspdk_event_iobuf.so.3.0 00:03:15.178 SO libspdk_event_fsdev.so.1.0 00:03:15.178 SYMLINK libspdk_event_vhost_blk.so 00:03:15.178 SYMLINK libspdk_event_sock.so 00:03:15.178 SYMLINK libspdk_event_scheduler.so 00:03:15.178 SYMLINK libspdk_event_keyring.so 00:03:15.178 SYMLINK libspdk_event_vmd.so 00:03:15.178 SYMLINK libspdk_event_fsdev.so 00:03:15.178 SYMLINK libspdk_event_iobuf.so 00:03:15.745 CC module/event/subsystems/accel/accel.o 00:03:16.005 LIB libspdk_event_accel.a 00:03:16.005 SO libspdk_event_accel.so.6.0 00:03:16.005 SYMLINK libspdk_event_accel.so 00:03:16.572 CC module/event/subsystems/bdev/bdev.o 00:03:16.572 LIB libspdk_event_bdev.a 00:03:16.572 SO libspdk_event_bdev.so.6.0 00:03:16.831 SYMLINK libspdk_event_bdev.so 00:03:17.090 CC module/event/subsystems/scsi/scsi.o 00:03:17.090 CC module/event/subsystems/nbd/nbd.o 00:03:17.090 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:17.090 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:17.090 CC module/event/subsystems/ublk/ublk.o 00:03:17.090 LIB libspdk_event_scsi.a 00:03:17.090 LIB libspdk_event_nbd.a 00:03:17.350 LIB libspdk_event_ublk.a 00:03:17.350 SO libspdk_event_nbd.so.6.0 00:03:17.350 SO libspdk_event_scsi.so.6.0 00:03:17.350 SO libspdk_event_ublk.so.3.0 00:03:17.350 SYMLINK libspdk_event_scsi.so 00:03:17.350 SYMLINK libspdk_event_nbd.so 00:03:17.350 LIB libspdk_event_nvmf.a 00:03:17.350 SYMLINK libspdk_event_ublk.so 00:03:17.350 SO libspdk_event_nvmf.so.6.0 00:03:17.350 SYMLINK libspdk_event_nvmf.so 00:03:17.610 CC module/event/subsystems/iscsi/iscsi.o 00:03:17.610 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:17.869 LIB libspdk_event_vhost_scsi.a 00:03:17.869 LIB libspdk_event_iscsi.a 00:03:17.869 SO libspdk_event_iscsi.so.6.0 00:03:17.869 SO libspdk_event_vhost_scsi.so.3.0 00:03:18.129 SYMLINK libspdk_event_iscsi.so 00:03:18.129 SYMLINK libspdk_event_vhost_scsi.so 00:03:18.129 SO libspdk.so.6.0 00:03:18.129 SYMLINK libspdk.so 00:03:18.698 CXX app/trace/trace.o 00:03:18.698 CC app/spdk_lspci/spdk_lspci.o 00:03:18.698 CC app/trace_record/trace_record.o 00:03:18.698 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:18.698 CC app/iscsi_tgt/iscsi_tgt.o 00:03:18.698 CC app/nvmf_tgt/nvmf_main.o 00:03:18.698 CC app/spdk_tgt/spdk_tgt.o 00:03:18.698 
CC examples/ioat/perf/perf.o 00:03:18.698 CC examples/util/zipf/zipf.o 00:03:18.698 CC test/thread/poller_perf/poller_perf.o 00:03:18.698 LINK spdk_lspci 00:03:18.698 LINK nvmf_tgt 00:03:18.698 LINK interrupt_tgt 00:03:18.698 LINK poller_perf 00:03:18.698 LINK iscsi_tgt 00:03:18.698 LINK spdk_tgt 00:03:18.698 LINK zipf 00:03:18.698 LINK spdk_trace_record 00:03:18.957 LINK ioat_perf 00:03:18.957 CC app/spdk_nvme_perf/perf.o 00:03:18.957 LINK spdk_trace 00:03:19.216 CC examples/ioat/verify/verify.o 00:03:19.216 CC app/spdk_nvme_identify/identify.o 00:03:19.216 CC app/spdk_nvme_discover/discovery_aer.o 00:03:19.216 CC app/spdk_top/spdk_top.o 00:03:19.216 CC examples/sock/hello_world/hello_sock.o 00:03:19.216 CC examples/thread/thread/thread_ex.o 00:03:19.216 CC test/dma/test_dma/test_dma.o 00:03:19.216 CC test/app/bdev_svc/bdev_svc.o 00:03:19.216 LINK verify 00:03:19.216 LINK spdk_nvme_discover 00:03:19.216 CC app/spdk_dd/spdk_dd.o 00:03:19.476 LINK bdev_svc 00:03:19.476 LINK hello_sock 00:03:19.476 LINK thread 00:03:19.476 TEST_HEADER include/spdk/accel.h 00:03:19.736 TEST_HEADER include/spdk/accel_module.h 00:03:19.736 TEST_HEADER include/spdk/assert.h 00:03:19.736 TEST_HEADER include/spdk/barrier.h 00:03:19.736 TEST_HEADER include/spdk/base64.h 00:03:19.736 TEST_HEADER include/spdk/bdev.h 00:03:19.736 TEST_HEADER include/spdk/bdev_module.h 00:03:19.736 TEST_HEADER include/spdk/bdev_zone.h 00:03:19.736 TEST_HEADER include/spdk/bit_array.h 00:03:19.736 TEST_HEADER include/spdk/bit_pool.h 00:03:19.736 TEST_HEADER include/spdk/blob_bdev.h 00:03:19.736 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:19.736 TEST_HEADER include/spdk/blobfs.h 00:03:19.736 TEST_HEADER include/spdk/blob.h 00:03:19.736 TEST_HEADER include/spdk/conf.h 00:03:19.736 TEST_HEADER include/spdk/config.h 00:03:19.736 TEST_HEADER include/spdk/cpuset.h 00:03:19.736 TEST_HEADER include/spdk/crc16.h 00:03:19.736 TEST_HEADER include/spdk/crc32.h 00:03:19.736 TEST_HEADER include/spdk/crc64.h 00:03:19.736 TEST_HEADER include/spdk/dif.h 00:03:19.736 TEST_HEADER include/spdk/dma.h 00:03:19.736 TEST_HEADER include/spdk/endian.h 00:03:19.736 TEST_HEADER include/spdk/env_dpdk.h 00:03:19.736 TEST_HEADER include/spdk/env.h 00:03:19.736 TEST_HEADER include/spdk/event.h 00:03:19.736 TEST_HEADER include/spdk/fd_group.h 00:03:19.736 TEST_HEADER include/spdk/fd.h 00:03:19.736 TEST_HEADER include/spdk/file.h 00:03:19.736 TEST_HEADER include/spdk/fsdev.h 00:03:19.736 TEST_HEADER include/spdk/fsdev_module.h 00:03:19.736 TEST_HEADER include/spdk/ftl.h 00:03:19.736 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:19.736 TEST_HEADER include/spdk/gpt_spec.h 00:03:19.736 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:19.736 TEST_HEADER include/spdk/hexlify.h 00:03:19.736 TEST_HEADER include/spdk/histogram_data.h 00:03:19.736 TEST_HEADER include/spdk/idxd.h 00:03:19.736 TEST_HEADER include/spdk/idxd_spec.h 00:03:19.736 TEST_HEADER include/spdk/init.h 00:03:19.736 TEST_HEADER include/spdk/ioat.h 00:03:19.736 TEST_HEADER include/spdk/ioat_spec.h 00:03:19.736 TEST_HEADER include/spdk/iscsi_spec.h 00:03:19.736 TEST_HEADER include/spdk/json.h 00:03:19.736 TEST_HEADER include/spdk/jsonrpc.h 00:03:19.736 TEST_HEADER include/spdk/keyring.h 00:03:19.736 TEST_HEADER include/spdk/keyring_module.h 00:03:19.736 TEST_HEADER include/spdk/likely.h 00:03:19.736 TEST_HEADER include/spdk/log.h 00:03:19.736 TEST_HEADER include/spdk/lvol.h 00:03:19.736 TEST_HEADER include/spdk/md5.h 00:03:19.736 TEST_HEADER include/spdk/memory.h 00:03:19.736 TEST_HEADER include/spdk/mmio.h 
00:03:19.736 TEST_HEADER include/spdk/nbd.h 00:03:19.736 TEST_HEADER include/spdk/net.h 00:03:19.736 TEST_HEADER include/spdk/notify.h 00:03:19.736 TEST_HEADER include/spdk/nvme.h 00:03:19.736 TEST_HEADER include/spdk/nvme_intel.h 00:03:19.736 LINK spdk_dd 00:03:19.736 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:19.736 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:19.736 TEST_HEADER include/spdk/nvme_spec.h 00:03:19.736 TEST_HEADER include/spdk/nvme_zns.h 00:03:19.736 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:19.736 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:19.736 TEST_HEADER include/spdk/nvmf.h 00:03:19.736 TEST_HEADER include/spdk/nvmf_spec.h 00:03:19.736 TEST_HEADER include/spdk/nvmf_transport.h 00:03:19.736 TEST_HEADER include/spdk/opal.h 00:03:19.736 TEST_HEADER include/spdk/opal_spec.h 00:03:19.736 TEST_HEADER include/spdk/pci_ids.h 00:03:19.736 TEST_HEADER include/spdk/pipe.h 00:03:19.736 LINK test_dma 00:03:19.736 TEST_HEADER include/spdk/queue.h 00:03:19.736 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:19.736 TEST_HEADER include/spdk/reduce.h 00:03:19.736 TEST_HEADER include/spdk/rpc.h 00:03:19.736 TEST_HEADER include/spdk/scheduler.h 00:03:19.736 TEST_HEADER include/spdk/scsi.h 00:03:19.736 TEST_HEADER include/spdk/scsi_spec.h 00:03:19.736 TEST_HEADER include/spdk/sock.h 00:03:19.736 TEST_HEADER include/spdk/stdinc.h 00:03:19.736 TEST_HEADER include/spdk/string.h 00:03:19.736 TEST_HEADER include/spdk/thread.h 00:03:19.736 TEST_HEADER include/spdk/trace.h 00:03:19.736 TEST_HEADER include/spdk/trace_parser.h 00:03:19.736 TEST_HEADER include/spdk/tree.h 00:03:19.736 TEST_HEADER include/spdk/ublk.h 00:03:19.736 TEST_HEADER include/spdk/util.h 00:03:19.736 TEST_HEADER include/spdk/uuid.h 00:03:19.736 TEST_HEADER include/spdk/version.h 00:03:19.736 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:19.736 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:19.736 TEST_HEADER include/spdk/vhost.h 00:03:19.736 TEST_HEADER include/spdk/vmd.h 00:03:19.736 TEST_HEADER include/spdk/xor.h 00:03:19.736 TEST_HEADER include/spdk/zipf.h 00:03:19.736 CXX test/cpp_headers/accel.o 00:03:19.996 CC app/fio/nvme/fio_plugin.o 00:03:19.996 LINK spdk_nvme_perf 00:03:19.996 CC examples/vmd/lsvmd/lsvmd.o 00:03:19.996 CXX test/cpp_headers/accel_module.o 00:03:19.996 CC test/app/histogram_perf/histogram_perf.o 00:03:19.996 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:20.255 LINK lsvmd 00:03:20.255 LINK spdk_nvme_identify 00:03:20.255 LINK nvme_fuzz 00:03:20.255 LINK spdk_top 00:03:20.255 CXX test/cpp_headers/assert.o 00:03:20.255 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:20.255 LINK histogram_perf 00:03:20.515 CC test/env/mem_callbacks/mem_callbacks.o 00:03:20.515 CC examples/vmd/led/led.o 00:03:20.515 CXX test/cpp_headers/barrier.o 00:03:20.515 CC test/env/vtophys/vtophys.o 00:03:20.515 CC test/app/jsoncat/jsoncat.o 00:03:20.515 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:20.515 CC test/event/event_perf/event_perf.o 00:03:20.515 LINK spdk_nvme 00:03:20.515 CXX test/cpp_headers/base64.o 00:03:20.515 LINK led 00:03:20.515 LINK vtophys 00:03:20.515 LINK jsoncat 00:03:20.774 LINK event_perf 00:03:20.774 LINK env_dpdk_post_init 00:03:20.774 LINK vhost_fuzz 00:03:20.774 CXX test/cpp_headers/bdev.o 00:03:20.774 CC app/fio/bdev/fio_plugin.o 00:03:20.774 CC test/event/reactor/reactor.o 00:03:20.774 CXX test/cpp_headers/bdev_module.o 00:03:20.774 LINK mem_callbacks 00:03:21.039 CC test/event/reactor_perf/reactor_perf.o 00:03:21.039 CC examples/idxd/perf/perf.o 00:03:21.039 CC 
test/env/memory/memory_ut.o 00:03:21.039 CC examples/accel/perf/accel_perf.o 00:03:21.039 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:21.039 LINK reactor 00:03:21.039 LINK reactor_perf 00:03:21.039 CXX test/cpp_headers/bdev_zone.o 00:03:21.334 CC test/nvme/aer/aer.o 00:03:21.334 CC test/nvme/reset/reset.o 00:03:21.334 LINK idxd_perf 00:03:21.334 LINK hello_fsdev 00:03:21.334 CXX test/cpp_headers/bit_array.o 00:03:21.334 CC test/event/app_repeat/app_repeat.o 00:03:21.334 LINK spdk_bdev 00:03:21.334 CXX test/cpp_headers/bit_pool.o 00:03:21.604 LINK app_repeat 00:03:21.604 LINK reset 00:03:21.604 LINK accel_perf 00:03:21.604 CC test/env/pci/pci_ut.o 00:03:21.604 LINK aer 00:03:21.604 CC app/vhost/vhost.o 00:03:21.604 CC test/nvme/sgl/sgl.o 00:03:21.604 CXX test/cpp_headers/blob_bdev.o 00:03:21.604 LINK iscsi_fuzz 00:03:21.604 CC test/event/scheduler/scheduler.o 00:03:21.863 LINK vhost 00:03:21.863 CC test/app/stub/stub.o 00:03:21.863 CC test/nvme/e2edp/nvme_dp.o 00:03:21.863 CXX test/cpp_headers/blobfs_bdev.o 00:03:21.863 LINK sgl 00:03:21.863 CC examples/blob/hello_world/hello_blob.o 00:03:21.863 LINK pci_ut 00:03:21.863 LINK stub 00:03:21.863 LINK scheduler 00:03:21.863 CXX test/cpp_headers/blobfs.o 00:03:22.128 LINK nvme_dp 00:03:22.128 LINK hello_blob 00:03:22.128 CC examples/nvme/hello_world/hello_world.o 00:03:22.128 CXX test/cpp_headers/blob.o 00:03:22.128 LINK memory_ut 00:03:22.128 CC examples/blob/cli/blobcli.o 00:03:22.128 CXX test/cpp_headers/conf.o 00:03:22.128 CC examples/bdev/hello_world/hello_bdev.o 00:03:22.389 CC examples/bdev/bdevperf/bdevperf.o 00:03:22.389 CC test/rpc_client/rpc_client_test.o 00:03:22.389 CC test/nvme/overhead/overhead.o 00:03:22.389 CXX test/cpp_headers/config.o 00:03:22.389 LINK hello_world 00:03:22.389 CXX test/cpp_headers/cpuset.o 00:03:22.389 LINK hello_bdev 00:03:22.389 CC test/nvme/err_injection/err_injection.o 00:03:22.389 LINK rpc_client_test 00:03:22.389 CC test/accel/dif/dif.o 00:03:22.650 CC test/blobfs/mkfs/mkfs.o 00:03:22.650 CXX test/cpp_headers/crc16.o 00:03:22.650 CC examples/nvme/reconnect/reconnect.o 00:03:22.650 CXX test/cpp_headers/crc32.o 00:03:22.650 LINK err_injection 00:03:22.650 CXX test/cpp_headers/crc64.o 00:03:22.650 LINK overhead 00:03:22.650 LINK blobcli 00:03:22.650 LINK mkfs 00:03:22.908 CXX test/cpp_headers/dif.o 00:03:22.908 CC test/nvme/startup/startup.o 00:03:22.908 CC test/nvme/reserve/reserve.o 00:03:22.909 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:22.909 CC examples/nvme/arbitration/arbitration.o 00:03:22.909 CXX test/cpp_headers/dma.o 00:03:22.909 LINK reconnect 00:03:22.909 CC examples/nvme/hotplug/hotplug.o 00:03:22.909 LINK startup 00:03:23.168 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:23.168 LINK reserve 00:03:23.168 LINK bdevperf 00:03:23.168 CXX test/cpp_headers/endian.o 00:03:23.168 CXX test/cpp_headers/env_dpdk.o 00:03:23.168 LINK cmb_copy 00:03:23.168 LINK hotplug 00:03:23.168 CC test/nvme/simple_copy/simple_copy.o 00:03:23.168 LINK dif 00:03:23.168 LINK arbitration 00:03:23.428 CXX test/cpp_headers/env.o 00:03:23.428 CC examples/nvme/abort/abort.o 00:03:23.428 CXX test/cpp_headers/event.o 00:03:23.428 CXX test/cpp_headers/fd_group.o 00:03:23.428 CXX test/cpp_headers/fd.o 00:03:23.428 CXX test/cpp_headers/file.o 00:03:23.428 LINK nvme_manage 00:03:23.428 CXX test/cpp_headers/fsdev.o 00:03:23.428 LINK simple_copy 00:03:23.687 CXX test/cpp_headers/fsdev_module.o 00:03:23.687 CC test/lvol/esnap/esnap.o 00:03:23.687 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:23.687 CXX 
test/cpp_headers/ftl.o 00:03:23.687 CXX test/cpp_headers/fuse_dispatcher.o 00:03:23.687 CXX test/cpp_headers/gpt_spec.o 00:03:23.687 CC test/bdev/bdevio/bdevio.o 00:03:23.687 CC test/nvme/connect_stress/connect_stress.o 00:03:23.687 LINK pmr_persistence 00:03:23.687 LINK abort 00:03:23.687 CC test/nvme/boot_partition/boot_partition.o 00:03:23.687 CXX test/cpp_headers/hexlify.o 00:03:23.947 CC test/nvme/compliance/nvme_compliance.o 00:03:23.947 CC test/nvme/fused_ordering/fused_ordering.o 00:03:23.947 LINK connect_stress 00:03:23.947 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:23.947 LINK boot_partition 00:03:23.947 CXX test/cpp_headers/histogram_data.o 00:03:23.947 CC test/nvme/fdp/fdp.o 00:03:23.947 CXX test/cpp_headers/idxd.o 00:03:24.206 LINK fused_ordering 00:03:24.207 LINK doorbell_aers 00:03:24.207 LINK bdevio 00:03:24.207 CXX test/cpp_headers/idxd_spec.o 00:03:24.207 CC examples/nvmf/nvmf/nvmf.o 00:03:24.207 LINK nvme_compliance 00:03:24.207 CC test/nvme/cuse/cuse.o 00:03:24.207 CXX test/cpp_headers/init.o 00:03:24.207 CXX test/cpp_headers/ioat.o 00:03:24.207 CXX test/cpp_headers/ioat_spec.o 00:03:24.207 CXX test/cpp_headers/iscsi_spec.o 00:03:24.466 CXX test/cpp_headers/json.o 00:03:24.466 CXX test/cpp_headers/jsonrpc.o 00:03:24.466 LINK fdp 00:03:24.466 CXX test/cpp_headers/keyring.o 00:03:24.466 CXX test/cpp_headers/keyring_module.o 00:03:24.466 CXX test/cpp_headers/likely.o 00:03:24.466 CXX test/cpp_headers/log.o 00:03:24.466 LINK nvmf 00:03:24.466 CXX test/cpp_headers/lvol.o 00:03:24.466 CXX test/cpp_headers/md5.o 00:03:24.466 CXX test/cpp_headers/memory.o 00:03:24.725 CXX test/cpp_headers/mmio.o 00:03:24.725 CXX test/cpp_headers/nbd.o 00:03:24.725 CXX test/cpp_headers/net.o 00:03:24.725 CXX test/cpp_headers/notify.o 00:03:24.725 CXX test/cpp_headers/nvme.o 00:03:24.725 CXX test/cpp_headers/nvme_intel.o 00:03:24.725 CXX test/cpp_headers/nvme_ocssd.o 00:03:24.725 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:24.725 CXX test/cpp_headers/nvme_spec.o 00:03:24.725 CXX test/cpp_headers/nvme_zns.o 00:03:24.725 CXX test/cpp_headers/nvmf_cmd.o 00:03:24.725 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:24.725 CXX test/cpp_headers/nvmf.o 00:03:24.725 CXX test/cpp_headers/nvmf_spec.o 00:03:24.984 CXX test/cpp_headers/nvmf_transport.o 00:03:24.984 CXX test/cpp_headers/opal.o 00:03:24.984 CXX test/cpp_headers/opal_spec.o 00:03:24.984 CXX test/cpp_headers/pci_ids.o 00:03:24.984 CXX test/cpp_headers/pipe.o 00:03:24.984 CXX test/cpp_headers/queue.o 00:03:24.984 CXX test/cpp_headers/reduce.o 00:03:24.984 CXX test/cpp_headers/rpc.o 00:03:24.984 CXX test/cpp_headers/scheduler.o 00:03:24.984 CXX test/cpp_headers/scsi.o 00:03:24.984 CXX test/cpp_headers/scsi_spec.o 00:03:24.984 CXX test/cpp_headers/sock.o 00:03:25.243 CXX test/cpp_headers/stdinc.o 00:03:25.243 CXX test/cpp_headers/string.o 00:03:25.243 CXX test/cpp_headers/thread.o 00:03:25.243 CXX test/cpp_headers/trace.o 00:03:25.243 CXX test/cpp_headers/trace_parser.o 00:03:25.243 CXX test/cpp_headers/tree.o 00:03:25.243 CXX test/cpp_headers/ublk.o 00:03:25.243 CXX test/cpp_headers/util.o 00:03:25.243 CXX test/cpp_headers/uuid.o 00:03:25.243 CXX test/cpp_headers/version.o 00:03:25.243 CXX test/cpp_headers/vfio_user_pci.o 00:03:25.244 CXX test/cpp_headers/vfio_user_spec.o 00:03:25.503 CXX test/cpp_headers/vhost.o 00:03:25.503 CXX test/cpp_headers/vmd.o 00:03:25.503 CXX test/cpp_headers/xor.o 00:03:25.503 CXX test/cpp_headers/zipf.o 00:03:25.503 LINK cuse 00:03:29.699 LINK esnap 00:03:30.268 00:03:30.268 real 1m24.631s 00:03:30.268 user 7m14.953s 
00:03:30.268 sys 1m51.910s 00:03:30.268 ************************************ 00:03:30.268 END TEST make 00:03:30.268 ************************************ 00:03:30.268 17:37:57 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:30.268 17:37:57 make -- common/autotest_common.sh@10 -- $ set +x 00:03:30.268 17:37:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:30.268 17:37:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:30.268 17:37:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:30.268 17:37:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.268 17:37:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:30.268 17:37:57 -- pm/common@44 -- $ pid=5297 00:03:30.268 17:37:57 -- pm/common@50 -- $ kill -TERM 5297 00:03:30.268 17:37:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.268 17:37:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:30.268 17:37:57 -- pm/common@44 -- $ pid=5299 00:03:30.268 17:37:57 -- pm/common@50 -- $ kill -TERM 5299 00:03:30.268 17:37:57 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:30.268 17:37:57 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:30.268 17:37:57 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:30.268 17:37:57 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:30.268 17:37:57 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:30.527 17:37:57 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:30.527 17:37:57 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:30.527 17:37:57 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:30.527 17:37:57 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:30.527 17:37:57 -- scripts/common.sh@336 -- # IFS=.-: 00:03:30.527 17:37:57 -- scripts/common.sh@336 -- # read -ra ver1 00:03:30.527 17:37:57 -- scripts/common.sh@337 -- # IFS=.-: 00:03:30.527 17:37:57 -- scripts/common.sh@337 -- # read -ra ver2 00:03:30.527 17:37:57 -- scripts/common.sh@338 -- # local 'op=<' 00:03:30.527 17:37:57 -- scripts/common.sh@340 -- # ver1_l=2 00:03:30.527 17:37:57 -- scripts/common.sh@341 -- # ver2_l=1 00:03:30.527 17:37:57 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:30.527 17:37:57 -- scripts/common.sh@344 -- # case "$op" in 00:03:30.527 17:37:57 -- scripts/common.sh@345 -- # : 1 00:03:30.527 17:37:57 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:30.527 17:37:57 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:30.527 17:37:57 -- scripts/common.sh@365 -- # decimal 1 00:03:30.527 17:37:57 -- scripts/common.sh@353 -- # local d=1 00:03:30.527 17:37:57 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:30.527 17:37:57 -- scripts/common.sh@355 -- # echo 1 00:03:30.527 17:37:57 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:30.527 17:37:57 -- scripts/common.sh@366 -- # decimal 2 00:03:30.527 17:37:57 -- scripts/common.sh@353 -- # local d=2 00:03:30.527 17:37:57 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:30.527 17:37:57 -- scripts/common.sh@355 -- # echo 2 00:03:30.527 17:37:57 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:30.527 17:37:57 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:30.527 17:37:57 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:30.527 17:37:57 -- scripts/common.sh@368 -- # return 0 00:03:30.527 17:37:57 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:30.527 17:37:57 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:30.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.527 --rc genhtml_branch_coverage=1 00:03:30.527 --rc genhtml_function_coverage=1 00:03:30.527 --rc genhtml_legend=1 00:03:30.527 --rc geninfo_all_blocks=1 00:03:30.527 --rc geninfo_unexecuted_blocks=1 00:03:30.527 00:03:30.527 ' 00:03:30.527 17:37:57 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:30.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.527 --rc genhtml_branch_coverage=1 00:03:30.527 --rc genhtml_function_coverage=1 00:03:30.527 --rc genhtml_legend=1 00:03:30.527 --rc geninfo_all_blocks=1 00:03:30.527 --rc geninfo_unexecuted_blocks=1 00:03:30.527 00:03:30.527 ' 00:03:30.527 17:37:57 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:30.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.527 --rc genhtml_branch_coverage=1 00:03:30.527 --rc genhtml_function_coverage=1 00:03:30.527 --rc genhtml_legend=1 00:03:30.527 --rc geninfo_all_blocks=1 00:03:30.527 --rc geninfo_unexecuted_blocks=1 00:03:30.527 00:03:30.527 ' 00:03:30.527 17:37:57 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:30.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:30.527 --rc genhtml_branch_coverage=1 00:03:30.527 --rc genhtml_function_coverage=1 00:03:30.527 --rc genhtml_legend=1 00:03:30.527 --rc geninfo_all_blocks=1 00:03:30.527 --rc geninfo_unexecuted_blocks=1 00:03:30.527 00:03:30.527 ' 00:03:30.527 17:37:57 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:30.527 17:37:57 -- nvmf/common.sh@7 -- # uname -s 00:03:30.527 17:37:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:30.527 17:37:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:30.527 17:37:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:30.527 17:37:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:30.527 17:37:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:30.527 17:37:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:30.527 17:37:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:30.527 17:37:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:30.527 17:37:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:30.527 17:37:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:30.527 17:37:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7142c3e-fe33-4b72-b423-d576a444e09d 00:03:30.527 
17:37:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=f7142c3e-fe33-4b72-b423-d576a444e09d 00:03:30.527 17:37:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:30.527 17:37:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:30.527 17:37:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:30.527 17:37:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:30.527 17:37:57 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:30.527 17:37:57 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:30.527 17:37:57 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:30.527 17:37:57 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:30.527 17:37:57 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:30.527 17:37:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.527 17:37:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.527 17:37:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.527 17:37:57 -- paths/export.sh@5 -- # export PATH 00:03:30.527 17:37:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:30.527 17:37:57 -- nvmf/common.sh@51 -- # : 0 00:03:30.527 17:37:57 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:30.527 17:37:57 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:30.527 17:37:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:30.527 17:37:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:30.527 17:37:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:30.527 17:37:57 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:30.527 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:30.527 17:37:57 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:30.527 17:37:57 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:30.527 17:37:57 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:30.527 17:37:57 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:30.527 17:37:57 -- spdk/autotest.sh@32 -- # uname -s 00:03:30.527 17:37:57 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:30.527 17:37:57 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:30.527 17:37:57 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:30.527 17:37:57 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:30.527 17:37:57 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:30.527 17:37:57 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:30.527 17:37:57 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:30.527 17:37:57 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:30.527 17:37:57 -- spdk/autotest.sh@48 -- # udevadm_pid=54783 00:03:30.528 17:37:57 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:30.528 17:37:57 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:30.528 17:37:57 -- pm/common@17 -- # local monitor 00:03:30.528 17:37:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.528 17:37:57 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.528 17:37:57 -- pm/common@21 -- # date +%s 00:03:30.528 17:37:57 -- pm/common@25 -- # sleep 1 00:03:30.528 17:37:57 -- pm/common@21 -- # date +%s 00:03:30.528 17:37:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732124277 00:03:30.528 17:37:57 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732124277 00:03:30.528 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732124277_collect-vmstat.pm.log 00:03:30.528 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732124277_collect-cpu-load.pm.log 00:03:31.904 17:37:58 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:31.904 17:37:58 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:31.904 17:37:58 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:31.904 17:37:58 -- common/autotest_common.sh@10 -- # set +x 00:03:31.904 17:37:58 -- spdk/autotest.sh@59 -- # create_test_list 00:03:31.904 17:37:58 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:31.904 17:37:58 -- common/autotest_common.sh@10 -- # set +x 00:03:31.904 17:37:58 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:31.904 17:37:58 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:31.904 17:37:58 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:31.904 17:37:58 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:31.904 17:37:58 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:31.904 17:37:58 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:31.904 17:37:58 -- common/autotest_common.sh@1457 -- # uname 00:03:31.904 17:37:58 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:31.904 17:37:58 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:31.904 17:37:58 -- common/autotest_common.sh@1477 -- # uname 00:03:31.904 17:37:58 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:31.904 17:37:58 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:31.904 17:37:58 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:31.904 lcov: LCOV version 1.15 00:03:31.904 17:37:58 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:46.796 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:46.796 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:04.880 17:38:28 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:04.880 17:38:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:04.880 17:38:28 -- common/autotest_common.sh@10 -- # set +x 00:04:04.880 17:38:28 -- spdk/autotest.sh@78 -- # rm -f 00:04:04.880 17:38:28 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:04.880 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:04.880 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:04.880 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:04.880 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:04.880 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:04.880 17:38:30 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:04.880 17:38:30 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:04.880 17:38:30 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:04.880 17:38:30 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:04:04.880 17:38:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.880 17:38:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:04:04.880 17:38:30 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:04.880 17:38:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:04.880 17:38:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.880 17:38:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.880 17:38:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:04:04.880 17:38:30 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:04:04.880 17:38:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:04.880 17:38:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.880 17:38:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.880 17:38:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:04:04.880 17:38:30 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:04:04.880 17:38:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:04.880 17:38:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.880 17:38:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.880 17:38:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:04:04.880 17:38:30 -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:04:04.880 17:38:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:04.880 17:38:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.880 17:38:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.880 17:38:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:04:04.880 17:38:30 -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:04:04.880 17:38:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:04.880 17:38:30 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.880 17:38:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.880 17:38:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:04:04.880 17:38:30 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:04:04.880 17:38:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:04.880 17:38:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.880 17:38:30 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:04:04.880 17:38:30 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:04:04.880 17:38:30 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:04:04.880 17:38:30 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:04.880 17:38:30 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:04.880 17:38:30 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:04.880 17:38:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.880 17:38:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:04.880 17:38:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:04.880 17:38:30 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:04.880 17:38:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:04.880 No valid GPT data, bailing 00:04:04.880 17:38:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:04.880 17:38:30 -- scripts/common.sh@394 -- # pt= 00:04:04.880 17:38:30 -- scripts/common.sh@395 -- # return 1 00:04:04.880 17:38:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:04.880 1+0 records in 00:04:04.880 1+0 records out 00:04:04.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018336 s, 57.2 MB/s 00:04:04.880 17:38:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.880 17:38:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:04.880 17:38:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:04.880 17:38:30 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:04.880 17:38:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:04.880 No valid GPT data, bailing 00:04:04.880 17:38:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:04.880 17:38:30 -- scripts/common.sh@394 -- # pt= 00:04:04.880 17:38:30 -- scripts/common.sh@395 -- # return 1 00:04:04.880 17:38:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:04.880 1+0 records in 00:04:04.880 1+0 records out 00:04:04.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00529748 s, 198 MB/s 00:04:04.880 17:38:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.880 17:38:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:04.880 17:38:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:04.880 17:38:30 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:04.880 17:38:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:04.880 No valid GPT data, bailing 00:04:04.880 17:38:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:04.880 17:38:30 -- scripts/common.sh@394 -- # pt= 00:04:04.880 17:38:30 -- scripts/common.sh@395 -- # return 1 00:04:04.880 17:38:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:04.880 1+0 
records in 00:04:04.880 1+0 records out 00:04:04.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0042759 s, 245 MB/s 00:04:04.880 17:38:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.880 17:38:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:04.880 17:38:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:04.880 17:38:30 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:04.880 17:38:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:04.880 No valid GPT data, bailing 00:04:04.880 17:38:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:04.880 17:38:30 -- scripts/common.sh@394 -- # pt= 00:04:04.880 17:38:30 -- scripts/common.sh@395 -- # return 1 00:04:04.880 17:38:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:04.880 1+0 records in 00:04:04.880 1+0 records out 00:04:04.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00560293 s, 187 MB/s 00:04:04.880 17:38:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.880 17:38:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:04.880 17:38:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:04.880 17:38:30 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:04.880 17:38:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:04.880 No valid GPT data, bailing 00:04:04.880 17:38:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:04.880 17:38:30 -- scripts/common.sh@394 -- # pt= 00:04:04.880 17:38:30 -- scripts/common.sh@395 -- # return 1 00:04:04.880 17:38:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:04.880 1+0 records in 00:04:04.880 1+0 records out 00:04:04.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436297 s, 240 MB/s 00:04:04.880 17:38:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:04.880 17:38:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:04.880 17:38:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:04.880 17:38:30 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:04.880 17:38:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:04.880 No valid GPT data, bailing 00:04:04.880 17:38:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:04.880 17:38:30 -- scripts/common.sh@394 -- # pt= 00:04:04.880 17:38:30 -- scripts/common.sh@395 -- # return 1 00:04:04.880 17:38:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:04.880 1+0 records in 00:04:04.880 1+0 records out 00:04:04.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00579294 s, 181 MB/s 00:04:04.880 17:38:30 -- spdk/autotest.sh@105 -- # sync 00:04:04.880 17:38:30 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:04.880 17:38:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:04.880 17:38:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:06.799 17:38:33 -- spdk/autotest.sh@111 -- # uname -s 00:04:06.799 17:38:33 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:06.799 17:38:33 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:06.799 17:38:33 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:07.736 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:07.995 
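For reference, the probe-and-wipe pass logged above reduces to a small shell pattern: skip namespaces whose queue reports them as zoned, probe for a partition table, and zero the first MiB only when the probe comes back empty. A minimal sketch under those assumptions — the device list is illustrative (autotest expands /dev/nvme*n!(*p*) instead), and it is condensed to the blkid probe, whereas the run above tries spdk-gpt.py first:

for dev in /dev/nvme0n1 /dev/nvme1n1; do          # illustrative list
    # Zoned namespaces are excluded from the wipe (queue/zoned != "none").
    [[ $(cat /sys/block/${dev#/dev/}/queue/zoned) != none ]] && continue
    # An empty PTTYPE means blkid found no partition table on the device.
    pt=$(blkid -s PTTYPE -o value "$dev")
    if [[ -z $pt ]]; then
        # Blank disk: zero the first MiB so stale metadata cannot surface in tests.
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done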
Hugepages 00:04:07.995 node hugesize free / total 00:04:07.995 node0 1048576kB 0 / 0 00:04:07.995 node0 2048kB 0 / 0 00:04:07.995 00:04:07.995 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:08.253 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:08.253 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:08.512 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:08.512 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:08.772 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:08.772 17:38:35 -- spdk/autotest.sh@117 -- # uname -s 00:04:08.772 17:38:35 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:08.772 17:38:35 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:08.772 17:38:35 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:09.339 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:10.280 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.280 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.280 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.280 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:10.280 17:38:37 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:11.297 17:38:38 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:11.297 17:38:38 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:11.297 17:38:38 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:11.297 17:38:38 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:11.297 17:38:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:11.297 17:38:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:11.297 17:38:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:11.297 17:38:38 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:11.297 17:38:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:11.557 17:38:38 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:11.557 17:38:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:11.557 17:38:38 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:12.124 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:12.124 Waiting for block devices as requested 00:04:12.383 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:12.383 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:12.640 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:12.640 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:17.914 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:17.914 17:38:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:17.914 17:38:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:17.914 17:38:44 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:17.914 17:38:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:17.914 17:38:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:17.914 17:38:44 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:17.914 17:38:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:17.914 17:38:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:17.914 17:38:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:17.914 17:38:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:17.914 17:38:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:17.914 17:38:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:17.914 17:38:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:17.914 17:38:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:17.914 17:38:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:17.914 17:38:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:17.914 17:38:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:17.914 17:38:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:17.914 17:38:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:17.914 17:38:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:17.914 17:38:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:17.914 17:38:44 -- common/autotest_common.sh@1543 -- # continue 00:04:17.914 17:38:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:17.914 17:38:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:17.914 17:38:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:17.914 17:38:44 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:17.914 17:38:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:17.914 17:38:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:17.914 17:38:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:17.914 17:38:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:17.914 17:38:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:17.914 17:38:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:17.914 17:38:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:17.914 17:38:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:17.914 17:38:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:17.914 17:38:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:17.914 17:38:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:17.914 17:38:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:17.914 17:38:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:17.914 17:38:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:17.914 17:38:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:17.914 17:38:44 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:17.914 17:38:44 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:17.914 17:38:44 -- common/autotest_common.sh@1543 -- # continue 00:04:17.914 17:38:44 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:17.914 17:38:44 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:17.914 17:38:44 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:17.914 17:38:44 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:17.914 17:38:44 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:17.914 17:38:44 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:17.914 17:38:44 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:17.914 17:38:44 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:17.914 17:38:44 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:17.914 17:38:44 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:17.914 17:38:44 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:17.914 17:38:44 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:17.914 17:38:44 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:17.914 17:38:44 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:17.914 17:38:44 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:17.914 17:38:44 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:17.914 17:38:44 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:17.914 17:38:44 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:17.914 17:38:44 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:17.915 17:38:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:17.915 17:38:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:17.915 17:38:45 -- common/autotest_common.sh@1543 -- # continue 00:04:17.915 17:38:45 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:17.915 17:38:45 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:17.915 17:38:45 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:17.915 17:38:45 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:17.915 17:38:45 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:17.915 17:38:45 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:17.915 17:38:45 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:17.915 17:38:45 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:17.915 17:38:45 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:17.915 17:38:45 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:17.915 17:38:45 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:17.915 17:38:45 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:17.915 17:38:45 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:17.915 17:38:45 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:17.915 17:38:45 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:17.915 17:38:45 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:17.915 17:38:45 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:17.915 17:38:45 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:17.915 17:38:45 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:17.915 17:38:45 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:17.915 17:38:45 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
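The four near-identical blocks above all perform one lookup: resolve a PCI address to its /dev/nvmeX controller node through sysfs, then read the OACS field from Identify Controller, where bit 3 (0x8) advertises namespace management — which is why oacs=0x12a yields oacs_ns_manage=8 in every run. A condensed sketch of that resolution (the BDF value is illustrative):

bdf=0000:00:10.0                                   # illustrative PCI address
# Each /sys/class/nvme/nvmeX entry resolves to the PCI device that owns it,
# so filtering the resolved paths by BDF picks out the matching controller.
path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
ctrlr=/dev/$(basename "$path")
# OACS is reported by Identify Controller; bit 3 set means the controller
# supports namespace management (0x12a & 0x8 == 8, as seen above).
oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
(( oacs & 0x8 )) && echo "$ctrlr supports namespace management"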
00:04:17.915 17:38:45 -- common/autotest_common.sh@1543 -- # continue 00:04:17.915 17:38:45 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:17.915 17:38:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:17.915 17:38:45 -- common/autotest_common.sh@10 -- # set +x 00:04:18.174 17:38:45 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:18.174 17:38:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.174 17:38:45 -- common/autotest_common.sh@10 -- # set +x 00:04:18.174 17:38:45 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:18.742 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.681 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.681 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.681 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.681 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:19.681 17:38:46 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:19.681 17:38:46 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:19.681 17:38:46 -- common/autotest_common.sh@10 -- # set +x 00:04:19.941 17:38:46 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:19.941 17:38:46 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:19.941 17:38:46 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:19.941 17:38:46 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:19.941 17:38:46 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:19.941 17:38:46 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:19.941 17:38:46 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:19.941 17:38:46 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:19.941 17:38:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:19.941 17:38:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:19.941 17:38:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:19.941 17:38:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:19.941 17:38:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:19.941 17:38:46 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:19.941 17:38:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:19.941 17:38:46 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:19.941 17:38:46 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:19.941 17:38:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:19.941 17:38:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:19.941 17:38:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:19.941 17:38:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:19.941 17:38:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:19.941 17:38:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:19.941 17:38:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:19.941 17:38:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:19.941 17:38:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:19.941 17:38:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:19.941 17:38:47 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:19.941 17:38:47 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:19.941 17:38:47 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:19.941 17:38:47 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:19.941 17:38:47 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:19.941 17:38:47 -- common/autotest_common.sh@1572 -- # return 0 00:04:19.941 17:38:47 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:19.941 17:38:47 -- common/autotest_common.sh@1580 -- # return 0 00:04:19.941 17:38:47 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:19.941 17:38:47 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:19.941 17:38:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:19.941 17:38:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:19.941 17:38:47 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:19.941 17:38:47 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.941 17:38:47 -- common/autotest_common.sh@10 -- # set +x 00:04:19.941 17:38:47 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:19.941 17:38:47 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:19.941 17:38:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:19.941 17:38:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:19.941 17:38:47 -- common/autotest_common.sh@10 -- # set +x 00:04:19.941 ************************************ 00:04:19.941 START TEST env 00:04:19.941 ************************************ 00:04:19.941 17:38:47 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:20.201 * Looking for test storage... 00:04:20.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:20.201 17:38:47 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:20.201 17:38:47 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:20.201 17:38:47 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:20.201 17:38:47 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:20.201 17:38:47 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:20.201 17:38:47 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:20.201 17:38:47 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:20.201 17:38:47 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.201 17:38:47 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:20.201 17:38:47 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:20.201 17:38:47 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:20.201 17:38:47 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:20.201 17:38:47 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:20.201 17:38:47 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:20.201 17:38:47 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:20.201 17:38:47 env -- scripts/common.sh@344 -- # case "$op" in 00:04:20.201 17:38:47 env -- scripts/common.sh@345 -- # : 1 00:04:20.201 17:38:47 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:20.201 17:38:47 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.201 17:38:47 env -- scripts/common.sh@365 -- # decimal 1 00:04:20.201 17:38:47 env -- scripts/common.sh@353 -- # local d=1 00:04:20.201 17:38:47 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.201 17:38:47 env -- scripts/common.sh@355 -- # echo 1 00:04:20.201 17:38:47 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:20.201 17:38:47 env -- scripts/common.sh@366 -- # decimal 2 00:04:20.201 17:38:47 env -- scripts/common.sh@353 -- # local d=2 00:04:20.201 17:38:47 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.201 17:38:47 env -- scripts/common.sh@355 -- # echo 2 00:04:20.201 17:38:47 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:20.201 17:38:47 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:20.201 17:38:47 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:20.201 17:38:47 env -- scripts/common.sh@368 -- # return 0 00:04:20.201 17:38:47 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.201 17:38:47 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:20.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.201 --rc genhtml_branch_coverage=1 00:04:20.201 --rc genhtml_function_coverage=1 00:04:20.201 --rc genhtml_legend=1 00:04:20.201 --rc geninfo_all_blocks=1 00:04:20.201 --rc geninfo_unexecuted_blocks=1 00:04:20.201 00:04:20.201 ' 00:04:20.201 17:38:47 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:20.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.201 --rc genhtml_branch_coverage=1 00:04:20.201 --rc genhtml_function_coverage=1 00:04:20.201 --rc genhtml_legend=1 00:04:20.201 --rc geninfo_all_blocks=1 00:04:20.201 --rc geninfo_unexecuted_blocks=1 00:04:20.201 00:04:20.201 ' 00:04:20.201 17:38:47 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:20.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.201 --rc genhtml_branch_coverage=1 00:04:20.201 --rc genhtml_function_coverage=1 00:04:20.201 --rc genhtml_legend=1 00:04:20.201 --rc geninfo_all_blocks=1 00:04:20.201 --rc geninfo_unexecuted_blocks=1 00:04:20.201 00:04:20.201 ' 00:04:20.201 17:38:47 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:20.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.201 --rc genhtml_branch_coverage=1 00:04:20.201 --rc genhtml_function_coverage=1 00:04:20.201 --rc genhtml_legend=1 00:04:20.201 --rc geninfo_all_blocks=1 00:04:20.201 --rc geninfo_unexecuted_blocks=1 00:04:20.201 00:04:20.201 ' 00:04:20.201 17:38:47 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:20.201 17:38:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.201 17:38:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.201 17:38:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.201 ************************************ 00:04:20.201 START TEST env_memory 00:04:20.201 ************************************ 00:04:20.201 17:38:47 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:20.201 00:04:20.201 00:04:20.201 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.201 http://cunit.sourceforge.net/ 00:04:20.201 00:04:20.201 00:04:20.201 Suite: memory 00:04:20.201 Test: alloc and free memory map ...[2024-11-20 17:38:47.369464] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:20.461 passed 00:04:20.461 Test: mem map translation ...[2024-11-20 17:38:47.445434] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:20.461 [2024-11-20 17:38:47.445544] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:20.461 [2024-11-20 17:38:47.445627] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:20.461 [2024-11-20 17:38:47.445662] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:20.461 passed 00:04:20.461 Test: mem map registration ...[2024-11-20 17:38:47.520328] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:20.461 [2024-11-20 17:38:47.520394] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:20.461 passed 00:04:20.461 Test: mem map adjacent registrations ...passed 00:04:20.461 00:04:20.461 Run Summary: Type Total Ran Passed Failed Inactive 00:04:20.461 suites 1 1 n/a 0 0 00:04:20.461 tests 4 4 4 0 0 00:04:20.461 asserts 152 152 152 0 n/a 00:04:20.461 00:04:20.461 Elapsed time = 0.306 seconds 00:04:20.720 00:04:20.720 real 0m0.360s 00:04:20.720 user 0m0.313s 00:04:20.720 sys 0m0.036s 00:04:20.720 17:38:47 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:20.720 17:38:47 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:20.720 ************************************ 00:04:20.720 END TEST env_memory 00:04:20.720 ************************************ 00:04:20.720 17:38:47 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:20.720 17:38:47 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:20.720 17:38:47 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:20.720 17:38:47 env -- common/autotest_common.sh@10 -- # set +x 00:04:20.720 ************************************ 00:04:20.720 START TEST env_vtophys 00:04:20.720 ************************************ 00:04:20.720 17:38:47 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:20.720 EAL: lib.eal log level changed from notice to debug 00:04:20.720 EAL: Detected lcore 0 as core 0 on socket 0 00:04:20.720 EAL: Detected lcore 1 as core 0 on socket 0 00:04:20.720 EAL: Detected lcore 2 as core 0 on socket 0 00:04:20.720 EAL: Detected lcore 3 as core 0 on socket 0 00:04:20.720 EAL: Detected lcore 4 as core 0 on socket 0 00:04:20.720 EAL: Detected lcore 5 as core 0 on socket 0 00:04:20.720 EAL: Detected lcore 6 as core 0 on socket 0 00:04:20.720 EAL: Detected lcore 7 as core 0 on socket 0 00:04:20.720 EAL: Detected lcore 8 as core 0 on socket 0 00:04:20.720 EAL: Detected lcore 9 as core 0 on socket 0 00:04:20.720 EAL: Maximum logical cores by configuration: 128 00:04:20.720 EAL: Detected CPU lcores: 10 00:04:20.720 EAL: Detected NUMA nodes: 1 00:04:20.720 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:20.720 EAL: Detected shared linkage of DPDK 00:04:20.720 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:20.720 EAL: Selected IOVA mode 'PA' 00:04:20.720 EAL: Probing VFIO support... 00:04:20.720 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:20.720 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:20.720 EAL: Ask a virtual area of 0x2e000 bytes 00:04:20.720 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:20.720 EAL: Setting up physically contiguous memory... 00:04:20.720 EAL: Setting maximum number of open files to 524288 00:04:20.720 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:20.720 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:20.720 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.720 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:20.720 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.720 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.720 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:20.720 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:20.720 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.720 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:20.720 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.720 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.720 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:20.720 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:20.720 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.720 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:20.720 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.720 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.720 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:20.720 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:20.720 EAL: Ask a virtual area of 0x61000 bytes 00:04:20.721 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:20.721 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:20.721 EAL: Ask a virtual area of 0x400000000 bytes 00:04:20.721 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:20.721 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:20.721 EAL: Hugepages will be freed exactly as allocated. 00:04:20.721 EAL: No shared files mode enabled, IPC is disabled 00:04:20.721 EAL: No shared files mode enabled, IPC is disabled 00:04:20.984 EAL: TSC frequency is ~2490000 KHz 00:04:20.984 EAL: Main lcore 0 is ready (tid=7f7a3729aa40;cpuset=[0]) 00:04:20.984 EAL: Trying to obtain current memory policy. 00:04:20.984 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:20.984 EAL: Restoring previous memory policy: 0 00:04:20.984 EAL: request: mp_malloc_sync 00:04:20.984 EAL: No shared files mode enabled, IPC is disabled 00:04:20.984 EAL: Heap on socket 0 was expanded by 2MB 00:04:20.984 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:20.984 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:20.984 EAL: Mem event callback 'spdk:(nil)' registered 00:04:20.984 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:20.984 00:04:20.984 00:04:20.984 CUnit - A unit testing framework for C - Version 2.1-3 00:04:20.984 http://cunit.sourceforge.net/ 00:04:20.984 00:04:20.984 00:04:20.984 Suite: components_suite 00:04:21.552 Test: vtophys_malloc_test ...passed 00:04:21.552 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:21.552 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.552 EAL: Restoring previous memory policy: 4 00:04:21.552 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.552 EAL: request: mp_malloc_sync 00:04:21.552 EAL: No shared files mode enabled, IPC is disabled 00:04:21.552 EAL: Heap on socket 0 was expanded by 4MB 00:04:21.552 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.552 EAL: request: mp_malloc_sync 00:04:21.552 EAL: No shared files mode enabled, IPC is disabled 00:04:21.552 EAL: Heap on socket 0 was shrunk by 4MB 00:04:21.552 EAL: Trying to obtain current memory policy. 00:04:21.552 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.552 EAL: Restoring previous memory policy: 4 00:04:21.552 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.552 EAL: request: mp_malloc_sync 00:04:21.552 EAL: No shared files mode enabled, IPC is disabled 00:04:21.552 EAL: Heap on socket 0 was expanded by 6MB 00:04:21.552 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.552 EAL: request: mp_malloc_sync 00:04:21.552 EAL: No shared files mode enabled, IPC is disabled 00:04:21.552 EAL: Heap on socket 0 was shrunk by 6MB 00:04:21.552 EAL: Trying to obtain current memory policy. 00:04:21.552 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.552 EAL: Restoring previous memory policy: 4 00:04:21.552 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.552 EAL: request: mp_malloc_sync 00:04:21.552 EAL: No shared files mode enabled, IPC is disabled 00:04:21.552 EAL: Heap on socket 0 was expanded by 10MB 00:04:21.552 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.552 EAL: request: mp_malloc_sync 00:04:21.552 EAL: No shared files mode enabled, IPC is disabled 00:04:21.552 EAL: Heap on socket 0 was shrunk by 10MB 00:04:21.552 EAL: Trying to obtain current memory policy. 00:04:21.552 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.552 EAL: Restoring previous memory policy: 4 00:04:21.552 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.552 EAL: request: mp_malloc_sync 00:04:21.552 EAL: No shared files mode enabled, IPC is disabled 00:04:21.552 EAL: Heap on socket 0 was expanded by 18MB 00:04:21.552 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.552 EAL: request: mp_malloc_sync 00:04:21.552 EAL: No shared files mode enabled, IPC is disabled 00:04:21.552 EAL: Heap on socket 0 was shrunk by 18MB 00:04:21.552 EAL: Trying to obtain current memory policy. 00:04:21.552 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.552 EAL: Restoring previous memory policy: 4 00:04:21.552 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.552 EAL: request: mp_malloc_sync 00:04:21.552 EAL: No shared files mode enabled, IPC is disabled 00:04:21.552 EAL: Heap on socket 0 was expanded by 34MB 00:04:21.552 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.552 EAL: request: mp_malloc_sync 00:04:21.552 EAL: No shared files mode enabled, IPC is disabled 00:04:21.552 EAL: Heap on socket 0 was shrunk by 34MB 00:04:21.552 EAL: Trying to obtain current memory policy. 
00:04:21.552 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.552 EAL: Restoring previous memory policy: 4 00:04:21.552 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.552 EAL: request: mp_malloc_sync 00:04:21.552 EAL: No shared files mode enabled, IPC is disabled 00:04:21.552 EAL: Heap on socket 0 was expanded by 66MB 00:04:21.812 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.812 EAL: request: mp_malloc_sync 00:04:21.812 EAL: No shared files mode enabled, IPC is disabled 00:04:21.812 EAL: Heap on socket 0 was shrunk by 66MB 00:04:21.812 EAL: Trying to obtain current memory policy. 00:04:21.812 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:21.812 EAL: Restoring previous memory policy: 4 00:04:21.812 EAL: Calling mem event callback 'spdk:(nil)' 00:04:21.812 EAL: request: mp_malloc_sync 00:04:21.812 EAL: No shared files mode enabled, IPC is disabled 00:04:21.813 EAL: Heap on socket 0 was expanded by 130MB 00:04:22.071 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.071 EAL: request: mp_malloc_sync 00:04:22.071 EAL: No shared files mode enabled, IPC is disabled 00:04:22.071 EAL: Heap on socket 0 was shrunk by 130MB 00:04:22.330 EAL: Trying to obtain current memory policy. 00:04:22.330 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:22.330 EAL: Restoring previous memory policy: 4 00:04:22.330 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.330 EAL: request: mp_malloc_sync 00:04:22.330 EAL: No shared files mode enabled, IPC is disabled 00:04:22.330 EAL: Heap on socket 0 was expanded by 258MB 00:04:22.897 EAL: Calling mem event callback 'spdk:(nil)' 00:04:22.897 EAL: request: mp_malloc_sync 00:04:22.897 EAL: No shared files mode enabled, IPC is disabled 00:04:22.897 EAL: Heap on socket 0 was shrunk by 258MB 00:04:23.576 EAL: Trying to obtain current memory policy. 00:04:23.576 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:23.576 EAL: Restoring previous memory policy: 4 00:04:23.576 EAL: Calling mem event callback 'spdk:(nil)' 00:04:23.576 EAL: request: mp_malloc_sync 00:04:23.576 EAL: No shared files mode enabled, IPC is disabled 00:04:23.576 EAL: Heap on socket 0 was expanded by 514MB 00:04:24.514 EAL: Calling mem event callback 'spdk:(nil)' 00:04:24.514 EAL: request: mp_malloc_sync 00:04:24.514 EAL: No shared files mode enabled, IPC is disabled 00:04:24.514 EAL: Heap on socket 0 was shrunk by 514MB 00:04:25.452 EAL: Trying to obtain current memory policy. 
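[annotation] The expand/shrink sizes logged in this suite (4, 6, 10, 18, 34, 66, 130, 258, 514 MB, with 1026 MB still to come) track the test's doubling allocations: each round appears to allocate a 2^n MiB buffer, with the heap growing by 2^n + 2 MiB, the extra 2 MiB presumably one hugepage of allocator overhead. That is a reading of the log, not a statement from the test source; the arithmetic checks out:

    for n in $(seq 1 10); do echo "$(( (1 << n) + 2 ))MB"; done
    # prints 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB, one per line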
00:04:25.452 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:25.452 EAL: Restoring previous memory policy: 4 00:04:25.452 EAL: Calling mem event callback 'spdk:(nil)' 00:04:25.452 EAL: request: mp_malloc_sync 00:04:25.452 EAL: No shared files mode enabled, IPC is disabled 00:04:25.452 EAL: Heap on socket 0 was expanded by 1026MB 00:04:27.355 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.355 EAL: request: mp_malloc_sync 00:04:27.355 EAL: No shared files mode enabled, IPC is disabled 00:04:27.355 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:29.264 passed 00:04:29.265 00:04:29.265 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.265 suites 1 1 n/a 0 0 00:04:29.265 tests 2 2 2 0 0 00:04:29.265 asserts 5838 5838 5838 0 n/a 00:04:29.265 00:04:29.265 Elapsed time = 8.174 seconds 00:04:29.265 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.265 EAL: request: mp_malloc_sync 00:04:29.265 EAL: No shared files mode enabled, IPC is disabled 00:04:29.265 EAL: Heap on socket 0 was shrunk by 2MB 00:04:29.265 EAL: No shared files mode enabled, IPC is disabled 00:04:29.265 EAL: No shared files mode enabled, IPC is disabled 00:04:29.265 EAL: No shared files mode enabled, IPC is disabled 00:04:29.265 00:04:29.265 real 0m8.520s 00:04:29.265 user 0m7.455s 00:04:29.265 sys 0m0.903s 00:04:29.265 17:38:56 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.265 17:38:56 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:29.265 ************************************ 00:04:29.265 END TEST env_vtophys 00:04:29.265 ************************************ 00:04:29.265 17:38:56 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:29.265 17:38:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.265 17:38:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.265 17:38:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.265 ************************************ 00:04:29.265 START TEST env_pci 00:04:29.265 ************************************ 00:04:29.265 17:38:56 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:29.265 00:04:29.265 00:04:29.265 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.265 http://cunit.sourceforge.net/ 00:04:29.265 00:04:29.265 00:04:29.265 Suite: pci 00:04:29.265 Test: pci_hook ...[2024-11-20 17:38:56.363637] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57636 has claimed it 00:04:29.265 EAL: Cannot find device (10000:00:01.0) 00:04:29.265 EAL: Failed to attach device on primary process 00:04:29.265 passed 00:04:29.265 00:04:29.265 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.265 suites 1 1 n/a 0 0 00:04:29.265 tests 1 1 1 0 0 00:04:29.265 asserts 25 25 25 0 n/a 00:04:29.265 00:04:29.265 Elapsed time = 0.011 seconds 00:04:29.265 00:04:29.265 real 0m0.124s 00:04:29.265 user 0m0.058s 00:04:29.265 sys 0m0.064s 00:04:29.265 17:38:56 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.524 17:38:56 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:29.524 ************************************ 00:04:29.524 END TEST env_pci 00:04:29.524 ************************************ 00:04:29.524 17:38:56 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:29.524 17:38:56 env -- env/env.sh@15 -- # uname 00:04:29.524 17:38:56 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:29.524 17:38:56 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:29.524 17:38:56 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.524 17:38:56 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:29.524 17:38:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.524 17:38:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.524 ************************************ 00:04:29.524 START TEST env_dpdk_post_init 00:04:29.524 ************************************ 00:04:29.524 17:38:56 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.524 EAL: Detected CPU lcores: 10 00:04:29.524 EAL: Detected NUMA nodes: 1 00:04:29.524 EAL: Detected shared linkage of DPDK 00:04:29.524 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:29.524 EAL: Selected IOVA mode 'PA' 00:04:29.784 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:29.784 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:29.784 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:29.784 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:29.784 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:29.784 Starting DPDK initialization... 00:04:29.784 Starting SPDK post initialization... 00:04:29.784 SPDK NVMe probe 00:04:29.784 Attaching to 0000:00:10.0 00:04:29.784 Attaching to 0000:00:11.0 00:04:29.784 Attaching to 0000:00:12.0 00:04:29.784 Attaching to 0000:00:13.0 00:04:29.784 Attached to 0000:00:10.0 00:04:29.784 Attached to 0000:00:11.0 00:04:29.784 Attached to 0000:00:13.0 00:04:29.784 Attached to 0000:00:12.0 00:04:29.784 Cleaning up... 
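[annotation] Two details worth noting in the trace above: env.sh assembles the argv from the uname gate ('-c 0x1' plus --base-virtaddr on Linux), and the attach completions arrive out of order (0000:00:13.0 before 0000:00:12.0) even though the probes were issued in ascending order, consistent with asynchronous controller attach during probe (an inference from the log, not something the test asserts). The timed invocation, reproduced from the run_test line:

    /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
        -c 0x1 --base-virtaddr=0x200000000000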
00:04:29.784 00:04:29.784 real 0m0.325s 00:04:29.784 user 0m0.107s 00:04:29.784 sys 0m0.121s 00:04:29.784 17:38:56 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.784 17:38:56 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.784 ************************************ 00:04:29.784 END TEST env_dpdk_post_init 00:04:29.784 ************************************ 00:04:29.784 17:38:56 env -- env/env.sh@26 -- # uname 00:04:29.784 17:38:56 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:29.784 17:38:56 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:29.784 17:38:56 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.784 17:38:56 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.784 17:38:56 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.784 ************************************ 00:04:29.784 START TEST env_mem_callbacks 00:04:29.784 ************************************ 00:04:29.784 17:38:56 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:30.044 EAL: Detected CPU lcores: 10 00:04:30.044 EAL: Detected NUMA nodes: 1 00:04:30.044 EAL: Detected shared linkage of DPDK 00:04:30.044 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:30.044 EAL: Selected IOVA mode 'PA' 00:04:30.044 00:04:30.044 00:04:30.044 CUnit - A unit testing framework for C - Version 2.1-3 00:04:30.044 http://cunit.sourceforge.net/ 00:04:30.044 00:04:30.044 00:04:30.044 Suite: memory 00:04:30.044 Test: test ... 00:04:30.044 register 0x200000200000 2097152 00:04:30.044 malloc 3145728 00:04:30.044 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:30.044 register 0x200000400000 4194304 00:04:30.044 buf 0x2000004fffc0 len 3145728 PASSED 00:04:30.044 malloc 64 00:04:30.044 buf 0x2000004ffec0 len 64 PASSED 00:04:30.044 malloc 4194304 00:04:30.044 register 0x200000800000 6291456 00:04:30.044 buf 0x2000009fffc0 len 4194304 PASSED 00:04:30.044 free 0x2000004fffc0 3145728 00:04:30.044 free 0x2000004ffec0 64 00:04:30.044 unregister 0x200000400000 4194304 PASSED 00:04:30.044 free 0x2000009fffc0 4194304 00:04:30.044 unregister 0x200000800000 6291456 PASSED 00:04:30.044 malloc 8388608 00:04:30.044 register 0x200000400000 10485760 00:04:30.044 buf 0x2000005fffc0 len 8388608 PASSED 00:04:30.044 free 0x2000005fffc0 8388608 00:04:30.044 unregister 0x200000400000 10485760 PASSED 00:04:30.044 passed 00:04:30.044 00:04:30.044 Run Summary: Type Total Ran Passed Failed Inactive 00:04:30.044 suites 1 1 n/a 0 0 00:04:30.044 tests 1 1 1 0 0 00:04:30.044 asserts 15 15 15 0 n/a 00:04:30.044 00:04:30.044 Elapsed time = 0.069 seconds 00:04:30.044 00:04:30.044 real 0m0.273s 00:04:30.044 user 0m0.104s 00:04:30.044 sys 0m0.067s 00:04:30.044 17:38:57 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.044 ************************************ 00:04:30.044 END TEST env_mem_callbacks 00:04:30.044 17:38:57 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:30.044 ************************************ 00:04:30.301 ************************************ 00:04:30.301 END TEST env 00:04:30.301 ************************************ 00:04:30.301 00:04:30.301 real 0m10.197s 00:04:30.301 user 0m8.268s 00:04:30.301 sys 0m1.561s 00:04:30.301 17:38:57 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.301 17:38:57 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:30.301 17:38:57 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:30.301 17:38:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.301 17:38:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.301 17:38:57 -- common/autotest_common.sh@10 -- # set +x 00:04:30.301 ************************************ 00:04:30.301 START TEST rpc 00:04:30.301 ************************************ 00:04:30.301 17:38:57 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:30.301 * Looking for test storage... 00:04:30.301 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:30.301 17:38:57 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.301 17:38:57 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.301 17:38:57 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.563 17:38:57 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.563 17:38:57 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.563 17:38:57 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.563 17:38:57 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.563 17:38:57 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.563 17:38:57 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.563 17:38:57 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.563 17:38:57 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.563 17:38:57 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.563 17:38:57 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.563 17:38:57 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.563 17:38:57 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.563 17:38:57 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:30.563 17:38:57 rpc -- scripts/common.sh@345 -- # : 1 00:04:30.563 17:38:57 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.563 17:38:57 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.563 17:38:57 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:30.563 17:38:57 rpc -- scripts/common.sh@353 -- # local d=1 00:04:30.563 17:38:57 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.563 17:38:57 rpc -- scripts/common.sh@355 -- # echo 1 00:04:30.563 17:38:57 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.563 17:38:57 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:30.563 17:38:57 rpc -- scripts/common.sh@353 -- # local d=2 00:04:30.563 17:38:57 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.563 17:38:57 rpc -- scripts/common.sh@355 -- # echo 2 00:04:30.563 17:38:57 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.563 17:38:57 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.563 17:38:57 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.563 17:38:57 rpc -- scripts/common.sh@368 -- # return 0 00:04:30.563 17:38:57 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.563 17:38:57 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.563 --rc genhtml_branch_coverage=1 00:04:30.563 --rc genhtml_function_coverage=1 00:04:30.563 --rc genhtml_legend=1 00:04:30.563 --rc geninfo_all_blocks=1 00:04:30.563 --rc geninfo_unexecuted_blocks=1 00:04:30.563 00:04:30.563 ' 00:04:30.563 17:38:57 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.563 --rc genhtml_branch_coverage=1 00:04:30.563 --rc genhtml_function_coverage=1 00:04:30.563 --rc genhtml_legend=1 00:04:30.563 --rc geninfo_all_blocks=1 00:04:30.563 --rc geninfo_unexecuted_blocks=1 00:04:30.563 00:04:30.563 ' 00:04:30.563 17:38:57 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.563 --rc genhtml_branch_coverage=1 00:04:30.563 --rc genhtml_function_coverage=1 00:04:30.563 --rc genhtml_legend=1 00:04:30.563 --rc geninfo_all_blocks=1 00:04:30.563 --rc geninfo_unexecuted_blocks=1 00:04:30.563 00:04:30.563 ' 00:04:30.563 17:38:57 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.563 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.563 --rc genhtml_branch_coverage=1 00:04:30.563 --rc genhtml_function_coverage=1 00:04:30.563 --rc genhtml_legend=1 00:04:30.563 --rc geninfo_all_blocks=1 00:04:30.563 --rc geninfo_unexecuted_blocks=1 00:04:30.563 00:04:30.563 ' 00:04:30.563 17:38:57 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57770 00:04:30.563 17:38:57 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.563 17:38:57 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57770 00:04:30.563 17:38:57 rpc -- common/autotest_common.sh@835 -- # '[' -z 57770 ']' 00:04:30.563 17:38:57 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:30.563 17:38:57 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.563 17:38:57 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
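[annotation] The launch-and-wait handshake traced next, sketched as a standalone fragment; waitforlisten is the autotest_common.sh helper seen in the trace, and the comment on its polling behavior is an assumption, not something this log shows:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
    spdk_pid=$!
    waitforlisten "$spdk_pid"   # assumed: polls /var/tmp/spdk.sock until the target answers RPCs

Note that '-e bdev' enables the bdev tracepoint group, which is what rpc_trace_cmd_test later verifies via trace_get_info (tpoint_group_mask 0x8, bdev tpoint_mask 0xffffffffffffffff).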
00:04:30.563 17:38:57 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:30.563 17:38:57 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.563 17:38:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.563 [2024-11-20 17:38:57.647294] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:04:30.563 [2024-11-20 17:38:57.647414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57770 ] 00:04:30.825 [2024-11-20 17:38:57.830690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.825 [2024-11-20 17:38:57.960090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:30.825 [2024-11-20 17:38:57.960152] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57770' to capture a snapshot of events at runtime. 00:04:30.825 [2024-11-20 17:38:57.960165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:30.825 [2024-11-20 17:38:57.960180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:30.825 [2024-11-20 17:38:57.960190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57770 for offline analysis/debug. 00:04:30.825 [2024-11-20 17:38:57.961594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.762 17:38:58 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.762 17:38:58 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:31.762 17:38:58 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:31.762 17:38:58 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:31.762 17:38:58 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:31.762 17:38:58 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:31.762 17:38:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:31.762 17:38:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:31.762 17:38:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.762 ************************************ 00:04:31.762 START TEST rpc_integrity 00:04:31.762 ************************************ 00:04:31.762 17:38:58 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:31.762 17:38:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:31.762 17:38:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:31.762 17:38:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.022 17:38:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.022 17:38:58 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:32.022 17:38:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:32.022 17:38:58 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:32.022 17:38:58 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
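[annotation] rpc_cmd is the harness wrapper around scripts/rpc.py talking to that socket; the first steps of rpc_integrity below, written as standalone commands (a sketch assuming the default /var/tmp/spdk.sock target):

    scripts/rpc.py bdev_get_bdevs | jq length   # 0: no bdevs yet
    scripts/rpc.py bdev_malloc_create 8 512     # 8 MB malloc bdev, 512 B blocks -> "Malloc0"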
00:04:32.022 17:38:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.022 17:38:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.022 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.022 17:38:59 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:32.022 17:38:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:32.022 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.022 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.022 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.022 17:38:59 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:32.022 { 00:04:32.022 "name": "Malloc0", 00:04:32.022 "aliases": [ 00:04:32.022 "de5e1d2d-90f5-4a2a-a9ca-1e61a5436279" 00:04:32.022 ], 00:04:32.022 "product_name": "Malloc disk", 00:04:32.022 "block_size": 512, 00:04:32.022 "num_blocks": 16384, 00:04:32.022 "uuid": "de5e1d2d-90f5-4a2a-a9ca-1e61a5436279", 00:04:32.022 "assigned_rate_limits": { 00:04:32.022 "rw_ios_per_sec": 0, 00:04:32.022 "rw_mbytes_per_sec": 0, 00:04:32.022 "r_mbytes_per_sec": 0, 00:04:32.022 "w_mbytes_per_sec": 0 00:04:32.022 }, 00:04:32.022 "claimed": false, 00:04:32.022 "zoned": false, 00:04:32.022 "supported_io_types": { 00:04:32.022 "read": true, 00:04:32.022 "write": true, 00:04:32.022 "unmap": true, 00:04:32.022 "flush": true, 00:04:32.022 "reset": true, 00:04:32.022 "nvme_admin": false, 00:04:32.022 "nvme_io": false, 00:04:32.022 "nvme_io_md": false, 00:04:32.022 "write_zeroes": true, 00:04:32.022 "zcopy": true, 00:04:32.022 "get_zone_info": false, 00:04:32.022 "zone_management": false, 00:04:32.022 "zone_append": false, 00:04:32.022 "compare": false, 00:04:32.022 "compare_and_write": false, 00:04:32.022 "abort": true, 00:04:32.022 "seek_hole": false, 00:04:32.022 "seek_data": false, 00:04:32.022 "copy": true, 00:04:32.022 "nvme_iov_md": false 00:04:32.022 }, 00:04:32.022 "memory_domains": [ 00:04:32.022 { 00:04:32.022 "dma_device_id": "system", 00:04:32.022 "dma_device_type": 1 00:04:32.022 }, 00:04:32.022 { 00:04:32.022 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.022 "dma_device_type": 2 00:04:32.022 } 00:04:32.022 ], 00:04:32.022 "driver_specific": {} 00:04:32.022 } 00:04:32.022 ]' 00:04:32.022 17:38:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:32.022 17:38:59 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:32.023 17:38:59 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:32.023 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.023 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.023 [2024-11-20 17:38:59.080729] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:32.023 [2024-11-20 17:38:59.080808] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:32.023 [2024-11-20 17:38:59.080856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:32.023 [2024-11-20 17:38:59.080876] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:32.023 [2024-11-20 17:38:59.083521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:32.023 [2024-11-20 17:38:59.083568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:32.023 
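[annotation] The claim/teardown sequence this suite walks through, again as plain rpc.py calls for reference (illustrative; the trace below performs the same steps via rpc_cmd):

    scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
    scripts/rpc.py bdev_get_bdevs | jq length   # 2: Malloc0 (now claimed) plus Passthru0
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete Malloc0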
Passthru0 00:04:32.023 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.023 17:38:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:32.023 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.023 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.023 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.023 17:38:59 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:32.023 { 00:04:32.023 "name": "Malloc0", 00:04:32.023 "aliases": [ 00:04:32.023 "de5e1d2d-90f5-4a2a-a9ca-1e61a5436279" 00:04:32.023 ], 00:04:32.023 "product_name": "Malloc disk", 00:04:32.023 "block_size": 512, 00:04:32.023 "num_blocks": 16384, 00:04:32.023 "uuid": "de5e1d2d-90f5-4a2a-a9ca-1e61a5436279", 00:04:32.023 "assigned_rate_limits": { 00:04:32.023 "rw_ios_per_sec": 0, 00:04:32.023 "rw_mbytes_per_sec": 0, 00:04:32.023 "r_mbytes_per_sec": 0, 00:04:32.023 "w_mbytes_per_sec": 0 00:04:32.023 }, 00:04:32.023 "claimed": true, 00:04:32.023 "claim_type": "exclusive_write", 00:04:32.023 "zoned": false, 00:04:32.023 "supported_io_types": { 00:04:32.023 "read": true, 00:04:32.023 "write": true, 00:04:32.023 "unmap": true, 00:04:32.023 "flush": true, 00:04:32.023 "reset": true, 00:04:32.023 "nvme_admin": false, 00:04:32.023 "nvme_io": false, 00:04:32.023 "nvme_io_md": false, 00:04:32.023 "write_zeroes": true, 00:04:32.023 "zcopy": true, 00:04:32.023 "get_zone_info": false, 00:04:32.023 "zone_management": false, 00:04:32.023 "zone_append": false, 00:04:32.023 "compare": false, 00:04:32.023 "compare_and_write": false, 00:04:32.023 "abort": true, 00:04:32.023 "seek_hole": false, 00:04:32.023 "seek_data": false, 00:04:32.023 "copy": true, 00:04:32.023 "nvme_iov_md": false 00:04:32.023 }, 00:04:32.023 "memory_domains": [ 00:04:32.023 { 00:04:32.023 "dma_device_id": "system", 00:04:32.023 "dma_device_type": 1 00:04:32.023 }, 00:04:32.023 { 00:04:32.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.023 "dma_device_type": 2 00:04:32.023 } 00:04:32.023 ], 00:04:32.023 "driver_specific": {} 00:04:32.023 }, 00:04:32.023 { 00:04:32.023 "name": "Passthru0", 00:04:32.023 "aliases": [ 00:04:32.023 "66923b26-8083-5082-a89c-060a8c0d1418" 00:04:32.023 ], 00:04:32.023 "product_name": "passthru", 00:04:32.023 "block_size": 512, 00:04:32.023 "num_blocks": 16384, 00:04:32.023 "uuid": "66923b26-8083-5082-a89c-060a8c0d1418", 00:04:32.023 "assigned_rate_limits": { 00:04:32.023 "rw_ios_per_sec": 0, 00:04:32.023 "rw_mbytes_per_sec": 0, 00:04:32.023 "r_mbytes_per_sec": 0, 00:04:32.023 "w_mbytes_per_sec": 0 00:04:32.023 }, 00:04:32.023 "claimed": false, 00:04:32.023 "zoned": false, 00:04:32.023 "supported_io_types": { 00:04:32.023 "read": true, 00:04:32.023 "write": true, 00:04:32.023 "unmap": true, 00:04:32.023 "flush": true, 00:04:32.023 "reset": true, 00:04:32.023 "nvme_admin": false, 00:04:32.023 "nvme_io": false, 00:04:32.023 "nvme_io_md": false, 00:04:32.023 "write_zeroes": true, 00:04:32.023 "zcopy": true, 00:04:32.023 "get_zone_info": false, 00:04:32.023 "zone_management": false, 00:04:32.023 "zone_append": false, 00:04:32.023 "compare": false, 00:04:32.023 "compare_and_write": false, 00:04:32.023 "abort": true, 00:04:32.023 "seek_hole": false, 00:04:32.023 "seek_data": false, 00:04:32.023 "copy": true, 00:04:32.023 "nvme_iov_md": false 00:04:32.023 }, 00:04:32.023 "memory_domains": [ 00:04:32.023 { 00:04:32.023 "dma_device_id": "system", 00:04:32.023 "dma_device_type": 1 00:04:32.023 }, 
00:04:32.023 { 00:04:32.023 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.023 "dma_device_type": 2 00:04:32.023 } 00:04:32.023 ], 00:04:32.023 "driver_specific": { 00:04:32.023 "passthru": { 00:04:32.023 "name": "Passthru0", 00:04:32.023 "base_bdev_name": "Malloc0" 00:04:32.023 } 00:04:32.023 } 00:04:32.023 } 00:04:32.023 ]' 00:04:32.023 17:38:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:32.023 17:38:59 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:32.023 17:38:59 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:32.023 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.023 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.023 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.023 17:38:59 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:32.023 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.023 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.282 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.282 17:38:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:32.282 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.282 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.282 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.282 17:38:59 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:32.282 17:38:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:32.282 17:38:59 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:32.282 00:04:32.282 real 0m0.344s 00:04:32.282 user 0m0.187s 00:04:32.282 sys 0m0.060s 00:04:32.282 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.282 17:38:59 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.282 ************************************ 00:04:32.282 END TEST rpc_integrity 00:04:32.282 ************************************ 00:04:32.282 17:38:59 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:32.282 17:38:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.282 17:38:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.282 17:38:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.282 ************************************ 00:04:32.282 START TEST rpc_plugins 00:04:32.282 ************************************ 00:04:32.282 17:38:59 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:32.282 17:38:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:32.282 17:38:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.282 17:38:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.282 17:38:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.282 17:38:59 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:32.282 17:38:59 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:32.282 17:38:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.282 17:38:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.282 17:38:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.282 17:38:59 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:32.282 { 00:04:32.282 "name": "Malloc1", 00:04:32.282 "aliases": [ 00:04:32.282 "90dfec45-47ea-4090-a276-bbbfd98d7064" 00:04:32.282 ], 00:04:32.282 "product_name": "Malloc disk", 00:04:32.282 "block_size": 4096, 00:04:32.282 "num_blocks": 256, 00:04:32.282 "uuid": "90dfec45-47ea-4090-a276-bbbfd98d7064", 00:04:32.282 "assigned_rate_limits": { 00:04:32.282 "rw_ios_per_sec": 0, 00:04:32.282 "rw_mbytes_per_sec": 0, 00:04:32.282 "r_mbytes_per_sec": 0, 00:04:32.282 "w_mbytes_per_sec": 0 00:04:32.282 }, 00:04:32.282 "claimed": false, 00:04:32.282 "zoned": false, 00:04:32.282 "supported_io_types": { 00:04:32.282 "read": true, 00:04:32.282 "write": true, 00:04:32.282 "unmap": true, 00:04:32.282 "flush": true, 00:04:32.282 "reset": true, 00:04:32.282 "nvme_admin": false, 00:04:32.282 "nvme_io": false, 00:04:32.282 "nvme_io_md": false, 00:04:32.282 "write_zeroes": true, 00:04:32.282 "zcopy": true, 00:04:32.282 "get_zone_info": false, 00:04:32.282 "zone_management": false, 00:04:32.282 "zone_append": false, 00:04:32.282 "compare": false, 00:04:32.282 "compare_and_write": false, 00:04:32.282 "abort": true, 00:04:32.282 "seek_hole": false, 00:04:32.282 "seek_data": false, 00:04:32.282 "copy": true, 00:04:32.282 "nvme_iov_md": false 00:04:32.282 }, 00:04:32.282 "memory_domains": [ 00:04:32.282 { 00:04:32.282 "dma_device_id": "system", 00:04:32.282 "dma_device_type": 1 00:04:32.282 }, 00:04:32.282 { 00:04:32.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:32.282 "dma_device_type": 2 00:04:32.282 } 00:04:32.282 ], 00:04:32.282 "driver_specific": {} 00:04:32.282 } 00:04:32.282 ]' 00:04:32.282 17:38:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:32.282 17:38:59 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:32.282 17:38:59 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:32.282 17:38:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.282 17:38:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.282 17:38:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.541 17:38:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:32.541 17:38:59 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.541 17:38:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.541 17:38:59 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.541 17:38:59 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:32.541 17:38:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:32.541 17:38:59 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:32.541 00:04:32.541 real 0m0.171s 00:04:32.541 user 0m0.087s 00:04:32.541 sys 0m0.038s 00:04:32.541 17:38:59 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.541 ************************************ 00:04:32.541 END TEST rpc_plugins 00:04:32.541 ************************************ 00:04:32.541 17:38:59 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:32.541 17:38:59 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:32.541 17:38:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.541 17:38:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.541 17:38:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.541 ************************************ 00:04:32.541 START TEST rpc_trace_cmd_test 
00:04:32.541 ************************************ 00:04:32.541 17:38:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:32.541 17:38:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:32.541 17:38:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:32.541 17:38:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.541 17:38:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:32.541 17:38:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.541 17:38:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:32.541 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57770", 00:04:32.541 "tpoint_group_mask": "0x8", 00:04:32.541 "iscsi_conn": { 00:04:32.541 "mask": "0x2", 00:04:32.541 "tpoint_mask": "0x0" 00:04:32.541 }, 00:04:32.541 "scsi": { 00:04:32.541 "mask": "0x4", 00:04:32.541 "tpoint_mask": "0x0" 00:04:32.541 }, 00:04:32.541 "bdev": { 00:04:32.542 "mask": "0x8", 00:04:32.542 "tpoint_mask": "0xffffffffffffffff" 00:04:32.542 }, 00:04:32.542 "nvmf_rdma": { 00:04:32.542 "mask": "0x10", 00:04:32.542 "tpoint_mask": "0x0" 00:04:32.542 }, 00:04:32.542 "nvmf_tcp": { 00:04:32.542 "mask": "0x20", 00:04:32.542 "tpoint_mask": "0x0" 00:04:32.542 }, 00:04:32.542 "ftl": { 00:04:32.542 "mask": "0x40", 00:04:32.542 "tpoint_mask": "0x0" 00:04:32.542 }, 00:04:32.542 "blobfs": { 00:04:32.542 "mask": "0x80", 00:04:32.542 "tpoint_mask": "0x0" 00:04:32.542 }, 00:04:32.542 "dsa": { 00:04:32.542 "mask": "0x200", 00:04:32.542 "tpoint_mask": "0x0" 00:04:32.542 }, 00:04:32.542 "thread": { 00:04:32.542 "mask": "0x400", 00:04:32.542 "tpoint_mask": "0x0" 00:04:32.542 }, 00:04:32.542 "nvme_pcie": { 00:04:32.542 "mask": "0x800", 00:04:32.542 "tpoint_mask": "0x0" 00:04:32.542 }, 00:04:32.542 "iaa": { 00:04:32.542 "mask": "0x1000", 00:04:32.542 "tpoint_mask": "0x0" 00:04:32.542 }, 00:04:32.542 "nvme_tcp": { 00:04:32.542 "mask": "0x2000", 00:04:32.542 "tpoint_mask": "0x0" 00:04:32.542 }, 00:04:32.542 "bdev_nvme": { 00:04:32.542 "mask": "0x4000", 00:04:32.542 "tpoint_mask": "0x0" 00:04:32.542 }, 00:04:32.542 "sock": { 00:04:32.542 "mask": "0x8000", 00:04:32.542 "tpoint_mask": "0x0" 00:04:32.542 }, 00:04:32.542 "blob": { 00:04:32.542 "mask": "0x10000", 00:04:32.542 "tpoint_mask": "0x0" 00:04:32.542 }, 00:04:32.542 "bdev_raid": { 00:04:32.542 "mask": "0x20000", 00:04:32.542 "tpoint_mask": "0x0" 00:04:32.542 }, 00:04:32.542 "scheduler": { 00:04:32.542 "mask": "0x40000", 00:04:32.542 "tpoint_mask": "0x0" 00:04:32.542 } 00:04:32.542 }' 00:04:32.542 17:38:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:32.542 17:38:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:32.542 17:38:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:32.542 17:38:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:32.542 17:38:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:32.801 17:38:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:32.801 17:38:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:32.802 17:38:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:32.802 17:38:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:32.802 17:38:59 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:32.802 00:04:32.802 real 0m0.255s 00:04:32.802 
user 0m0.208s 00:04:32.802 sys 0m0.036s 00:04:32.802 17:38:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.802 17:38:59 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:32.802 ************************************ 00:04:32.802 END TEST rpc_trace_cmd_test 00:04:32.802 ************************************ 00:04:32.802 17:38:59 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:32.802 17:38:59 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:32.802 17:38:59 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:32.802 17:38:59 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.802 17:38:59 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.802 17:38:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.802 ************************************ 00:04:32.802 START TEST rpc_daemon_integrity 00:04:32.802 ************************************ 00:04:32.802 17:38:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:32.802 17:38:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:32.802 17:38:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.802 17:38:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:32.802 17:38:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:32.802 17:38:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:32.802 17:38:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:32.802 17:38:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:32.802 17:38:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:32.802 17:38:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:32.802 17:38:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.062 17:38:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.062 17:38:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:33.062 17:38:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:33.062 17:38:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.062 17:38:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.062 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.062 17:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:33.062 { 00:04:33.062 "name": "Malloc2", 00:04:33.062 "aliases": [ 00:04:33.062 "8b174720-c0b0-4c26-8e25-48388cc3ac8b" 00:04:33.062 ], 00:04:33.062 "product_name": "Malloc disk", 00:04:33.062 "block_size": 512, 00:04:33.062 "num_blocks": 16384, 00:04:33.062 "uuid": "8b174720-c0b0-4c26-8e25-48388cc3ac8b", 00:04:33.062 "assigned_rate_limits": { 00:04:33.062 "rw_ios_per_sec": 0, 00:04:33.062 "rw_mbytes_per_sec": 0, 00:04:33.062 "r_mbytes_per_sec": 0, 00:04:33.062 "w_mbytes_per_sec": 0 00:04:33.062 }, 00:04:33.062 "claimed": false, 00:04:33.062 "zoned": false, 00:04:33.062 "supported_io_types": { 00:04:33.062 "read": true, 00:04:33.062 "write": true, 00:04:33.062 "unmap": true, 00:04:33.062 "flush": true, 00:04:33.062 "reset": true, 00:04:33.062 "nvme_admin": false, 00:04:33.062 "nvme_io": false, 00:04:33.062 "nvme_io_md": false, 00:04:33.062 "write_zeroes": true, 00:04:33.062 "zcopy": true, 00:04:33.062 "get_zone_info": 
false, 00:04:33.062 "zone_management": false, 00:04:33.062 "zone_append": false, 00:04:33.062 "compare": false, 00:04:33.062 "compare_and_write": false, 00:04:33.062 "abort": true, 00:04:33.062 "seek_hole": false, 00:04:33.062 "seek_data": false, 00:04:33.062 "copy": true, 00:04:33.062 "nvme_iov_md": false 00:04:33.062 }, 00:04:33.062 "memory_domains": [ 00:04:33.062 { 00:04:33.062 "dma_device_id": "system", 00:04:33.062 "dma_device_type": 1 00:04:33.062 }, 00:04:33.062 { 00:04:33.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.062 "dma_device_type": 2 00:04:33.062 } 00:04:33.062 ], 00:04:33.062 "driver_specific": {} 00:04:33.062 } 00:04:33.062 ]' 00:04:33.062 17:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:33.062 17:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:33.062 17:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:33.062 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.062 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.062 [2024-11-20 17:39:00.064876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:33.062 [2024-11-20 17:39:00.064945] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:33.062 [2024-11-20 17:39:00.064970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:33.062 [2024-11-20 17:39:00.064985] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:33.062 [2024-11-20 17:39:00.067578] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:33.062 [2024-11-20 17:39:00.067624] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:33.062 Passthru0 00:04:33.062 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.062 17:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:33.062 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.062 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.062 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.062 17:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:33.062 { 00:04:33.062 "name": "Malloc2", 00:04:33.062 "aliases": [ 00:04:33.062 "8b174720-c0b0-4c26-8e25-48388cc3ac8b" 00:04:33.062 ], 00:04:33.062 "product_name": "Malloc disk", 00:04:33.062 "block_size": 512, 00:04:33.062 "num_blocks": 16384, 00:04:33.062 "uuid": "8b174720-c0b0-4c26-8e25-48388cc3ac8b", 00:04:33.062 "assigned_rate_limits": { 00:04:33.062 "rw_ios_per_sec": 0, 00:04:33.062 "rw_mbytes_per_sec": 0, 00:04:33.062 "r_mbytes_per_sec": 0, 00:04:33.062 "w_mbytes_per_sec": 0 00:04:33.062 }, 00:04:33.062 "claimed": true, 00:04:33.062 "claim_type": "exclusive_write", 00:04:33.062 "zoned": false, 00:04:33.062 "supported_io_types": { 00:04:33.062 "read": true, 00:04:33.062 "write": true, 00:04:33.062 "unmap": true, 00:04:33.062 "flush": true, 00:04:33.062 "reset": true, 00:04:33.062 "nvme_admin": false, 00:04:33.062 "nvme_io": false, 00:04:33.062 "nvme_io_md": false, 00:04:33.063 "write_zeroes": true, 00:04:33.063 "zcopy": true, 00:04:33.063 "get_zone_info": false, 00:04:33.063 "zone_management": false, 00:04:33.063 "zone_append": false, 00:04:33.063 "compare": false, 
00:04:33.063 "compare_and_write": false, 00:04:33.063 "abort": true, 00:04:33.063 "seek_hole": false, 00:04:33.063 "seek_data": false, 00:04:33.063 "copy": true, 00:04:33.063 "nvme_iov_md": false 00:04:33.063 }, 00:04:33.063 "memory_domains": [ 00:04:33.063 { 00:04:33.063 "dma_device_id": "system", 00:04:33.063 "dma_device_type": 1 00:04:33.063 }, 00:04:33.063 { 00:04:33.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.063 "dma_device_type": 2 00:04:33.063 } 00:04:33.063 ], 00:04:33.063 "driver_specific": {} 00:04:33.063 }, 00:04:33.063 { 00:04:33.063 "name": "Passthru0", 00:04:33.063 "aliases": [ 00:04:33.063 "8058634c-c337-5aed-998a-83c455061741" 00:04:33.063 ], 00:04:33.063 "product_name": "passthru", 00:04:33.063 "block_size": 512, 00:04:33.063 "num_blocks": 16384, 00:04:33.063 "uuid": "8058634c-c337-5aed-998a-83c455061741", 00:04:33.063 "assigned_rate_limits": { 00:04:33.063 "rw_ios_per_sec": 0, 00:04:33.063 "rw_mbytes_per_sec": 0, 00:04:33.063 "r_mbytes_per_sec": 0, 00:04:33.063 "w_mbytes_per_sec": 0 00:04:33.063 }, 00:04:33.063 "claimed": false, 00:04:33.063 "zoned": false, 00:04:33.063 "supported_io_types": { 00:04:33.063 "read": true, 00:04:33.063 "write": true, 00:04:33.063 "unmap": true, 00:04:33.063 "flush": true, 00:04:33.063 "reset": true, 00:04:33.063 "nvme_admin": false, 00:04:33.063 "nvme_io": false, 00:04:33.063 "nvme_io_md": false, 00:04:33.063 "write_zeroes": true, 00:04:33.063 "zcopy": true, 00:04:33.063 "get_zone_info": false, 00:04:33.063 "zone_management": false, 00:04:33.063 "zone_append": false, 00:04:33.063 "compare": false, 00:04:33.063 "compare_and_write": false, 00:04:33.063 "abort": true, 00:04:33.063 "seek_hole": false, 00:04:33.063 "seek_data": false, 00:04:33.063 "copy": true, 00:04:33.063 "nvme_iov_md": false 00:04:33.063 }, 00:04:33.063 "memory_domains": [ 00:04:33.063 { 00:04:33.063 "dma_device_id": "system", 00:04:33.063 "dma_device_type": 1 00:04:33.063 }, 00:04:33.063 { 00:04:33.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:33.063 "dma_device_type": 2 00:04:33.063 } 00:04:33.063 ], 00:04:33.063 "driver_specific": { 00:04:33.063 "passthru": { 00:04:33.063 "name": "Passthru0", 00:04:33.063 "base_bdev_name": "Malloc2" 00:04:33.063 } 00:04:33.063 } 00:04:33.063 } 00:04:33.063 ]' 00:04:33.063 17:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:33.063 17:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:33.063 17:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:33.063 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.063 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.063 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.063 17:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:33.063 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.063 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.063 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.063 17:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:33.063 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:33.063 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.063 17:39:00 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:33.063 17:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:33.063 17:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:33.323 17:39:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:33.323 00:04:33.323 real 0m0.373s 00:04:33.323 user 0m0.210s 00:04:33.323 sys 0m0.064s 00:04:33.323 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.323 ************************************ 00:04:33.323 END TEST rpc_daemon_integrity 00:04:33.323 ************************************ 00:04:33.323 17:39:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:33.323 17:39:00 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:33.323 17:39:00 rpc -- rpc/rpc.sh@84 -- # killprocess 57770 00:04:33.323 17:39:00 rpc -- common/autotest_common.sh@954 -- # '[' -z 57770 ']' 00:04:33.323 17:39:00 rpc -- common/autotest_common.sh@958 -- # kill -0 57770 00:04:33.323 17:39:00 rpc -- common/autotest_common.sh@959 -- # uname 00:04:33.323 17:39:00 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.323 17:39:00 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57770 00:04:33.323 killing process with pid 57770 00:04:33.323 17:39:00 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.323 17:39:00 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.323 17:39:00 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57770' 00:04:33.323 17:39:00 rpc -- common/autotest_common.sh@973 -- # kill 57770 00:04:33.323 17:39:00 rpc -- common/autotest_common.sh@978 -- # wait 57770 00:04:35.857 00:04:35.857 real 0m5.494s 00:04:35.857 user 0m6.018s 00:04:35.857 sys 0m1.001s 00:04:35.857 17:39:02 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.857 17:39:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.857 ************************************ 00:04:35.857 END TEST rpc 00:04:35.857 ************************************ 00:04:35.857 17:39:02 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:35.857 17:39:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.857 17:39:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.857 17:39:02 -- common/autotest_common.sh@10 -- # set +x 00:04:35.857 ************************************ 00:04:35.857 START TEST skip_rpc 00:04:35.857 ************************************ 00:04:35.857 17:39:02 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:35.857 * Looking for test storage... 
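[annotation] The lcov gate about to be traced (it already ran once for the rpc suite above) is a field-wise version comparison from scripts/common.sh; a simplified sketch of its effect, numeric fields only, whereas the real script also validates each field:

    lt() {
        local -a v1 v2
        local i
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    lt 1.15 2 && echo "lcov is older than 2.x"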
00:04:35.857 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:35.857 17:39:03 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:35.857 17:39:03 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:35.858 17:39:03 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:36.117 17:39:03 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.117 17:39:03 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:36.117 17:39:03 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.117 17:39:03 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:36.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.118 --rc genhtml_branch_coverage=1 00:04:36.118 --rc genhtml_function_coverage=1 00:04:36.118 --rc genhtml_legend=1 00:04:36.118 --rc geninfo_all_blocks=1 00:04:36.118 --rc geninfo_unexecuted_blocks=1 00:04:36.118 00:04:36.118 ' 00:04:36.118 17:39:03 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:36.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.118 --rc genhtml_branch_coverage=1 00:04:36.118 --rc genhtml_function_coverage=1 00:04:36.118 --rc genhtml_legend=1 00:04:36.118 --rc geninfo_all_blocks=1 00:04:36.118 --rc geninfo_unexecuted_blocks=1 00:04:36.118 00:04:36.118 ' 00:04:36.118 17:39:03 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:04:36.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.118 --rc genhtml_branch_coverage=1 00:04:36.118 --rc genhtml_function_coverage=1 00:04:36.118 --rc genhtml_legend=1 00:04:36.118 --rc geninfo_all_blocks=1 00:04:36.118 --rc geninfo_unexecuted_blocks=1 00:04:36.118 00:04:36.118 ' 00:04:36.118 17:39:03 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:36.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.118 --rc genhtml_branch_coverage=1 00:04:36.118 --rc genhtml_function_coverage=1 00:04:36.118 --rc genhtml_legend=1 00:04:36.118 --rc geninfo_all_blocks=1 00:04:36.118 --rc geninfo_unexecuted_blocks=1 00:04:36.118 00:04:36.118 ' 00:04:36.118 17:39:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:36.118 17:39:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:36.118 17:39:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:36.118 17:39:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.118 17:39:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.118 17:39:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.118 ************************************ 00:04:36.118 START TEST skip_rpc 00:04:36.118 ************************************ 00:04:36.118 17:39:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:36.118 17:39:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57999 00:04:36.118 17:39:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.118 17:39:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:36.118 17:39:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:36.118 [2024-11-20 17:39:03.244123] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:04:36.118 [2024-11-20 17:39:03.244283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57999 ] 00:04:36.377 [2024-11-20 17:39:03.423259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.377 [2024-11-20 17:39:03.538429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57999 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57999 ']' 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57999 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57999 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.655 killing process with pid 57999 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57999' 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57999 00:04:41.655 17:39:08 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57999 00:04:43.556 00:04:43.556 real 0m7.473s 00:04:43.556 user 0m6.966s 00:04:43.556 sys 0m0.417s 00:04:43.556 17:39:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.556 ************************************ 00:04:43.556 END TEST skip_rpc 00:04:43.556 ************************************ 00:04:43.556 17:39:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:43.556 17:39:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:43.556 17:39:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.556 17:39:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.556 17:39:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.556 ************************************ 00:04:43.556 START TEST skip_rpc_with_json 00:04:43.556 ************************************ 00:04:43.556 17:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:43.556 17:39:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:43.556 17:39:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58103 00:04:43.556 17:39:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:43.556 17:39:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.556 17:39:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58103 00:04:43.556 17:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58103 ']' 00:04:43.556 17:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.556 17:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.556 17:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.556 17:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.556 17:39:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:43.816 [2024-11-20 17:39:10.796180] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:04:43.816 [2024-11-20 17:39:10.796331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58103 ] 00:04:43.816 [2024-11-20 17:39:10.975803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.075 [2024-11-20 17:39:11.094222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.013 17:39:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:45.013 17:39:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:45.013 17:39:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:45.013 17:39:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.013 17:39:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.013 [2024-11-20 17:39:11.960113] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:45.013 request: 00:04:45.013 { 00:04:45.013 "trtype": "tcp", 00:04:45.013 "method": "nvmf_get_transports", 00:04:45.013 "req_id": 1 00:04:45.013 } 00:04:45.013 Got JSON-RPC error response 00:04:45.013 response: 00:04:45.013 { 00:04:45.013 "code": -19, 00:04:45.013 "message": "No such device" 00:04:45.013 } 00:04:45.013 17:39:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:45.013 17:39:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:45.013 17:39:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.013 17:39:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.013 [2024-11-20 17:39:11.972224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:45.013 17:39:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.013 17:39:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:45.013 17:39:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:45.013 17:39:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:45.013 17:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:45.013 17:39:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:45.013 { 00:04:45.013 "subsystems": [ 00:04:45.013 { 00:04:45.013 "subsystem": "fsdev", 00:04:45.013 "config": [ 00:04:45.013 { 00:04:45.013 "method": "fsdev_set_opts", 00:04:45.013 "params": { 00:04:45.013 "fsdev_io_pool_size": 65535, 00:04:45.013 "fsdev_io_cache_size": 256 00:04:45.013 } 00:04:45.013 } 00:04:45.013 ] 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "subsystem": "keyring", 00:04:45.013 "config": [] 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "subsystem": "iobuf", 00:04:45.013 "config": [ 00:04:45.013 { 00:04:45.013 "method": "iobuf_set_options", 00:04:45.013 "params": { 00:04:45.013 "small_pool_count": 8192, 00:04:45.013 "large_pool_count": 1024, 00:04:45.013 "small_bufsize": 8192, 00:04:45.013 "large_bufsize": 135168, 00:04:45.013 "enable_numa": false 00:04:45.013 } 00:04:45.013 } 00:04:45.013 ] 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "subsystem": "sock", 00:04:45.013 "config": [ 00:04:45.013 { 
00:04:45.013 "method": "sock_set_default_impl", 00:04:45.013 "params": { 00:04:45.013 "impl_name": "posix" 00:04:45.013 } 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "method": "sock_impl_set_options", 00:04:45.013 "params": { 00:04:45.013 "impl_name": "ssl", 00:04:45.013 "recv_buf_size": 4096, 00:04:45.013 "send_buf_size": 4096, 00:04:45.013 "enable_recv_pipe": true, 00:04:45.013 "enable_quickack": false, 00:04:45.013 "enable_placement_id": 0, 00:04:45.013 "enable_zerocopy_send_server": true, 00:04:45.013 "enable_zerocopy_send_client": false, 00:04:45.013 "zerocopy_threshold": 0, 00:04:45.013 "tls_version": 0, 00:04:45.013 "enable_ktls": false 00:04:45.013 } 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "method": "sock_impl_set_options", 00:04:45.013 "params": { 00:04:45.013 "impl_name": "posix", 00:04:45.013 "recv_buf_size": 2097152, 00:04:45.013 "send_buf_size": 2097152, 00:04:45.013 "enable_recv_pipe": true, 00:04:45.013 "enable_quickack": false, 00:04:45.013 "enable_placement_id": 0, 00:04:45.013 "enable_zerocopy_send_server": true, 00:04:45.013 "enable_zerocopy_send_client": false, 00:04:45.013 "zerocopy_threshold": 0, 00:04:45.013 "tls_version": 0, 00:04:45.013 "enable_ktls": false 00:04:45.013 } 00:04:45.013 } 00:04:45.013 ] 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "subsystem": "vmd", 00:04:45.013 "config": [] 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "subsystem": "accel", 00:04:45.013 "config": [ 00:04:45.013 { 00:04:45.013 "method": "accel_set_options", 00:04:45.013 "params": { 00:04:45.013 "small_cache_size": 128, 00:04:45.013 "large_cache_size": 16, 00:04:45.013 "task_count": 2048, 00:04:45.013 "sequence_count": 2048, 00:04:45.013 "buf_count": 2048 00:04:45.013 } 00:04:45.013 } 00:04:45.013 ] 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "subsystem": "bdev", 00:04:45.013 "config": [ 00:04:45.013 { 00:04:45.013 "method": "bdev_set_options", 00:04:45.013 "params": { 00:04:45.013 "bdev_io_pool_size": 65535, 00:04:45.013 "bdev_io_cache_size": 256, 00:04:45.013 "bdev_auto_examine": true, 00:04:45.013 "iobuf_small_cache_size": 128, 00:04:45.013 "iobuf_large_cache_size": 16 00:04:45.013 } 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "method": "bdev_raid_set_options", 00:04:45.013 "params": { 00:04:45.013 "process_window_size_kb": 1024, 00:04:45.013 "process_max_bandwidth_mb_sec": 0 00:04:45.013 } 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "method": "bdev_iscsi_set_options", 00:04:45.013 "params": { 00:04:45.013 "timeout_sec": 30 00:04:45.013 } 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "method": "bdev_nvme_set_options", 00:04:45.013 "params": { 00:04:45.013 "action_on_timeout": "none", 00:04:45.013 "timeout_us": 0, 00:04:45.013 "timeout_admin_us": 0, 00:04:45.013 "keep_alive_timeout_ms": 10000, 00:04:45.013 "arbitration_burst": 0, 00:04:45.013 "low_priority_weight": 0, 00:04:45.013 "medium_priority_weight": 0, 00:04:45.013 "high_priority_weight": 0, 00:04:45.013 "nvme_adminq_poll_period_us": 10000, 00:04:45.013 "nvme_ioq_poll_period_us": 0, 00:04:45.013 "io_queue_requests": 0, 00:04:45.013 "delay_cmd_submit": true, 00:04:45.013 "transport_retry_count": 4, 00:04:45.013 "bdev_retry_count": 3, 00:04:45.013 "transport_ack_timeout": 0, 00:04:45.013 "ctrlr_loss_timeout_sec": 0, 00:04:45.013 "reconnect_delay_sec": 0, 00:04:45.013 "fast_io_fail_timeout_sec": 0, 00:04:45.013 "disable_auto_failback": false, 00:04:45.013 "generate_uuids": false, 00:04:45.013 "transport_tos": 0, 00:04:45.013 "nvme_error_stat": false, 00:04:45.013 "rdma_srq_size": 0, 00:04:45.013 "io_path_stat": false, 
00:04:45.013 "allow_accel_sequence": false, 00:04:45.013 "rdma_max_cq_size": 0, 00:04:45.013 "rdma_cm_event_timeout_ms": 0, 00:04:45.013 "dhchap_digests": [ 00:04:45.013 "sha256", 00:04:45.013 "sha384", 00:04:45.013 "sha512" 00:04:45.013 ], 00:04:45.013 "dhchap_dhgroups": [ 00:04:45.013 "null", 00:04:45.013 "ffdhe2048", 00:04:45.013 "ffdhe3072", 00:04:45.013 "ffdhe4096", 00:04:45.013 "ffdhe6144", 00:04:45.013 "ffdhe8192" 00:04:45.013 ] 00:04:45.013 } 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "method": "bdev_nvme_set_hotplug", 00:04:45.013 "params": { 00:04:45.013 "period_us": 100000, 00:04:45.013 "enable": false 00:04:45.013 } 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "method": "bdev_wait_for_examine" 00:04:45.013 } 00:04:45.013 ] 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "subsystem": "scsi", 00:04:45.013 "config": null 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "subsystem": "scheduler", 00:04:45.013 "config": [ 00:04:45.013 { 00:04:45.013 "method": "framework_set_scheduler", 00:04:45.013 "params": { 00:04:45.013 "name": "static" 00:04:45.013 } 00:04:45.013 } 00:04:45.013 ] 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "subsystem": "vhost_scsi", 00:04:45.013 "config": [] 00:04:45.013 }, 00:04:45.013 { 00:04:45.013 "subsystem": "vhost_blk", 00:04:45.014 "config": [] 00:04:45.014 }, 00:04:45.014 { 00:04:45.014 "subsystem": "ublk", 00:04:45.014 "config": [] 00:04:45.014 }, 00:04:45.014 { 00:04:45.014 "subsystem": "nbd", 00:04:45.014 "config": [] 00:04:45.014 }, 00:04:45.014 { 00:04:45.014 "subsystem": "nvmf", 00:04:45.014 "config": [ 00:04:45.014 { 00:04:45.014 "method": "nvmf_set_config", 00:04:45.014 "params": { 00:04:45.014 "discovery_filter": "match_any", 00:04:45.014 "admin_cmd_passthru": { 00:04:45.014 "identify_ctrlr": false 00:04:45.014 }, 00:04:45.014 "dhchap_digests": [ 00:04:45.014 "sha256", 00:04:45.014 "sha384", 00:04:45.014 "sha512" 00:04:45.014 ], 00:04:45.014 "dhchap_dhgroups": [ 00:04:45.014 "null", 00:04:45.014 "ffdhe2048", 00:04:45.014 "ffdhe3072", 00:04:45.014 "ffdhe4096", 00:04:45.014 "ffdhe6144", 00:04:45.014 "ffdhe8192" 00:04:45.014 ] 00:04:45.014 } 00:04:45.014 }, 00:04:45.014 { 00:04:45.014 "method": "nvmf_set_max_subsystems", 00:04:45.014 "params": { 00:04:45.014 "max_subsystems": 1024 00:04:45.014 } 00:04:45.014 }, 00:04:45.014 { 00:04:45.014 "method": "nvmf_set_crdt", 00:04:45.014 "params": { 00:04:45.014 "crdt1": 0, 00:04:45.014 "crdt2": 0, 00:04:45.014 "crdt3": 0 00:04:45.014 } 00:04:45.014 }, 00:04:45.014 { 00:04:45.014 "method": "nvmf_create_transport", 00:04:45.014 "params": { 00:04:45.014 "trtype": "TCP", 00:04:45.014 "max_queue_depth": 128, 00:04:45.014 "max_io_qpairs_per_ctrlr": 127, 00:04:45.014 "in_capsule_data_size": 4096, 00:04:45.014 "max_io_size": 131072, 00:04:45.014 "io_unit_size": 131072, 00:04:45.014 "max_aq_depth": 128, 00:04:45.014 "num_shared_buffers": 511, 00:04:45.014 "buf_cache_size": 4294967295, 00:04:45.014 "dif_insert_or_strip": false, 00:04:45.014 "zcopy": false, 00:04:45.014 "c2h_success": true, 00:04:45.014 "sock_priority": 0, 00:04:45.014 "abort_timeout_sec": 1, 00:04:45.014 "ack_timeout": 0, 00:04:45.014 "data_wr_pool_size": 0 00:04:45.014 } 00:04:45.014 } 00:04:45.014 ] 00:04:45.014 }, 00:04:45.014 { 00:04:45.014 "subsystem": "iscsi", 00:04:45.014 "config": [ 00:04:45.014 { 00:04:45.014 "method": "iscsi_set_options", 00:04:45.014 "params": { 00:04:45.014 "node_base": "iqn.2016-06.io.spdk", 00:04:45.014 "max_sessions": 128, 00:04:45.014 "max_connections_per_session": 2, 00:04:45.014 "max_queue_depth": 64, 00:04:45.014 
"default_time2wait": 2, 00:04:45.014 "default_time2retain": 20, 00:04:45.014 "first_burst_length": 8192, 00:04:45.014 "immediate_data": true, 00:04:45.014 "allow_duplicated_isid": false, 00:04:45.014 "error_recovery_level": 0, 00:04:45.014 "nop_timeout": 60, 00:04:45.014 "nop_in_interval": 30, 00:04:45.014 "disable_chap": false, 00:04:45.014 "require_chap": false, 00:04:45.014 "mutual_chap": false, 00:04:45.014 "chap_group": 0, 00:04:45.014 "max_large_datain_per_connection": 64, 00:04:45.014 "max_r2t_per_connection": 4, 00:04:45.014 "pdu_pool_size": 36864, 00:04:45.014 "immediate_data_pool_size": 16384, 00:04:45.014 "data_out_pool_size": 2048 00:04:45.014 } 00:04:45.014 } 00:04:45.014 ] 00:04:45.014 } 00:04:45.014 ] 00:04:45.014 } 00:04:45.014 17:39:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:45.014 17:39:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58103 00:04:45.014 17:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58103 ']' 00:04:45.014 17:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58103 00:04:45.014 17:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:45.014 17:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.014 17:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58103 00:04:45.272 17:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.272 17:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.272 killing process with pid 58103 00:04:45.272 17:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58103' 00:04:45.272 17:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58103 00:04:45.272 17:39:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58103 00:04:47.808 17:39:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58159 00:04:47.808 17:39:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:47.808 17:39:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:53.084 17:39:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58159 00:04:53.084 17:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58159 ']' 00:04:53.084 17:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58159 00:04:53.084 17:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:53.084 17:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:53.084 17:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58159 00:04:53.084 killing process with pid 58159 00:04:53.084 17:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:53.084 17:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:53.084 17:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58159' 00:04:53.084 17:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58159 00:04:53.084 17:39:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58159 00:04:54.985 17:39:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:54.986 17:39:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:54.986 00:04:54.986 real 0m11.457s 00:04:54.986 user 0m10.877s 00:04:54.986 sys 0m0.965s 00:04:54.986 17:39:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.986 17:39:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.986 ************************************ 00:04:54.986 END TEST skip_rpc_with_json 00:04:54.986 ************************************ 00:04:55.247 17:39:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:55.247 17:39:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.247 17:39:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.247 17:39:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.247 ************************************ 00:04:55.247 START TEST skip_rpc_with_delay 00:04:55.247 ************************************ 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.247 [2024-11-20 17:39:22.327464] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
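That *ERROR* line is the expected result: skip_rpc_with_delay passes mutually exclusive flags on purpose and only checks that startup fails, since --wait-for-rpc pauses initialization until an RPC arrives while --no-rpc-server removes the listener that would deliver it. A minimal reproduction, using the exact command from the trace:

    # Startup must fail; treat an accidental success as the test failure.
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'unexpected success' >&2
        exit 1
    fi
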
00:04:55.247 ************************************ 00:04:55.247 END TEST skip_rpc_with_delay 00:04:55.247 ************************************ 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:55.247 00:04:55.247 real 0m0.167s 00:04:55.247 user 0m0.077s 00:04:55.247 sys 0m0.088s 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.247 17:39:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:55.512 17:39:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:55.512 17:39:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:55.512 17:39:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:55.512 17:39:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:55.512 17:39:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.512 17:39:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.512 ************************************ 00:04:55.512 START TEST exit_on_failed_rpc_init 00:04:55.512 ************************************ 00:04:55.512 17:39:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:55.512 17:39:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58298 00:04:55.512 17:39:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.512 17:39:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58298 00:04:55.512 17:39:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58298 ']' 00:04:55.512 17:39:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.512 17:39:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:55.512 17:39:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.512 17:39:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:55.512 17:39:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.512 [2024-11-20 17:39:22.577902] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:04:55.512 [2024-11-20 17:39:22.578372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58298 ] 00:04:55.771 [2024-11-20 17:39:22.764302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.771 [2024-11-20 17:39:22.883205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.706 17:39:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:56.706 17:39:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:56.706 17:39:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.706 17:39:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.706 17:39:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:56.706 17:39:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.706 17:39:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.706 17:39:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.706 17:39:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.706 17:39:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.706 17:39:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.706 17:39:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:56.706 17:39:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:56.706 17:39:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:56.706 17:39:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.964 [2024-11-20 17:39:23.897095] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:04:56.964 [2024-11-20 17:39:23.897218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58316 ] 00:04:56.964 [2024-11-20 17:39:24.081878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.237 [2024-11-20 17:39:24.201468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.237 [2024-11-20 17:39:24.201585] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:57.237 [2024-11-20 17:39:24.201603] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:57.237 [2024-11-20 17:39:24.201625] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58298 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58298 ']' 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58298 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58298 00:04:57.496 killing process with pid 58298 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58298' 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58298 00:04:57.496 17:39:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58298 00:05:00.072 ************************************ 00:05:00.072 END TEST exit_on_failed_rpc_init 00:05:00.072 ************************************ 00:05:00.072 00:05:00.072 real 0m4.480s 00:05:00.072 user 0m4.787s 00:05:00.072 sys 0m0.650s 00:05:00.072 17:39:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.072 17:39:26 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:00.072 17:39:26 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:00.072 00:05:00.072 real 0m24.125s 00:05:00.072 user 0m22.939s 00:05:00.072 sys 0m2.445s 00:05:00.072 17:39:27 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.072 ************************************ 00:05:00.072 END TEST skip_rpc 00:05:00.072 ************************************ 00:05:00.072 17:39:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:00.072 17:39:27 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:00.072 17:39:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.072 17:39:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.072 17:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:00.072 
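Every suite above tears its spdk_tgt down through the same killprocess helper, whose shape can be read off the repeated kill/ps/wait lines in the trace. A rough reconstruction from only the commands visible here (the sudo special case checked at autotest_common.sh@964 is never taken in this run and is omitted):

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1       # a pid argument is required
        kill -0 "$pid" || return 1      # bail if the process is already gone
        local name=
        [ "$(uname)" = Linux ] && name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        # The real helper branches when $name is sudo; that path is not exercised here.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                     # reap it and propagate the exit status
    }
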
************************************ 00:05:00.072 START TEST rpc_client 00:05:00.072 ************************************ 00:05:00.072 17:39:27 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:00.072 * Looking for test storage... 00:05:00.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:00.072 17:39:27 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.072 17:39:27 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.072 17:39:27 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.331 17:39:27 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.331 17:39:27 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:00.331 17:39:27 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.331 17:39:27 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.332 --rc genhtml_branch_coverage=1 00:05:00.332 --rc genhtml_function_coverage=1 00:05:00.332 --rc genhtml_legend=1 00:05:00.332 --rc geninfo_all_blocks=1 00:05:00.332 --rc geninfo_unexecuted_blocks=1 00:05:00.332 00:05:00.332 ' 00:05:00.332 17:39:27 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.332 --rc genhtml_branch_coverage=1 00:05:00.332 --rc genhtml_function_coverage=1 00:05:00.332 --rc genhtml_legend=1 00:05:00.332 --rc geninfo_all_blocks=1 00:05:00.332 --rc geninfo_unexecuted_blocks=1 00:05:00.332 00:05:00.332 ' 00:05:00.332 17:39:27 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.332 --rc genhtml_branch_coverage=1 00:05:00.332 --rc genhtml_function_coverage=1 00:05:00.332 --rc genhtml_legend=1 00:05:00.332 --rc geninfo_all_blocks=1 00:05:00.332 --rc geninfo_unexecuted_blocks=1 00:05:00.332 00:05:00.332 ' 00:05:00.332 17:39:27 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.332 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.332 --rc genhtml_branch_coverage=1 00:05:00.332 --rc genhtml_function_coverage=1 00:05:00.332 --rc genhtml_legend=1 00:05:00.332 --rc geninfo_all_blocks=1 00:05:00.332 --rc geninfo_unexecuted_blocks=1 00:05:00.332 00:05:00.332 ' 00:05:00.332 17:39:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:00.332 OK 00:05:00.332 17:39:27 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:00.332 00:05:00.332 real 0m0.317s 00:05:00.332 user 0m0.164s 00:05:00.332 sys 0m0.167s 00:05:00.332 17:39:27 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.332 17:39:27 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:00.332 ************************************ 00:05:00.332 END TEST rpc_client 00:05:00.332 ************************************ 00:05:00.332 17:39:27 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:00.332 17:39:27 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.332 17:39:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.332 17:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:00.332 ************************************ 00:05:00.332 START TEST json_config 00:05:00.332 ************************************ 00:05:00.332 17:39:27 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:00.591 17:39:27 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.591 17:39:27 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.591 17:39:27 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.591 17:39:27 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.591 17:39:27 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.591 17:39:27 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.591 17:39:27 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.591 17:39:27 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.591 17:39:27 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.591 17:39:27 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.591 17:39:27 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.591 17:39:27 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.591 17:39:27 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.591 17:39:27 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.591 17:39:27 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.591 17:39:27 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:00.591 17:39:27 json_config -- scripts/common.sh@345 -- # : 1 00:05:00.591 17:39:27 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.591 17:39:27 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:00.591 17:39:27 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:00.591 17:39:27 json_config -- scripts/common.sh@353 -- # local d=1 00:05:00.591 17:39:27 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.591 17:39:27 json_config -- scripts/common.sh@355 -- # echo 1 00:05:00.591 17:39:27 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.591 17:39:27 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:00.591 17:39:27 json_config -- scripts/common.sh@353 -- # local d=2 00:05:00.591 17:39:27 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.591 17:39:27 json_config -- scripts/common.sh@355 -- # echo 2 00:05:00.591 17:39:27 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.591 17:39:27 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.591 17:39:27 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.591 17:39:27 json_config -- scripts/common.sh@368 -- # return 0 00:05:00.591 17:39:27 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.591 17:39:27 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.591 --rc genhtml_branch_coverage=1 00:05:00.591 --rc genhtml_function_coverage=1 00:05:00.591 --rc genhtml_legend=1 00:05:00.591 --rc geninfo_all_blocks=1 00:05:00.591 --rc geninfo_unexecuted_blocks=1 00:05:00.591 00:05:00.591 ' 00:05:00.591 17:39:27 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.591 --rc genhtml_branch_coverage=1 00:05:00.591 --rc genhtml_function_coverage=1 00:05:00.591 --rc genhtml_legend=1 00:05:00.591 --rc geninfo_all_blocks=1 00:05:00.591 --rc geninfo_unexecuted_blocks=1 00:05:00.591 00:05:00.591 ' 00:05:00.591 17:39:27 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.591 --rc genhtml_branch_coverage=1 00:05:00.591 --rc genhtml_function_coverage=1 00:05:00.591 --rc genhtml_legend=1 00:05:00.591 --rc geninfo_all_blocks=1 00:05:00.591 --rc geninfo_unexecuted_blocks=1 00:05:00.591 00:05:00.591 ' 00:05:00.591 17:39:27 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.591 --rc genhtml_branch_coverage=1 00:05:00.591 --rc genhtml_function_coverage=1 00:05:00.591 --rc genhtml_legend=1 00:05:00.591 --rc geninfo_all_blocks=1 00:05:00.591 --rc geninfo_unexecuted_blocks=1 00:05:00.591 00:05:00.591 ' 00:05:00.591 17:39:27 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:00.591 17:39:27 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7142c3e-fe33-4b72-b423-d576a444e09d 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=f7142c3e-fe33-4b72-b423-d576a444e09d 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:00.591 17:39:27 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:00.591 17:39:27 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.591 17:39:27 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.591 17:39:27 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.591 17:39:27 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.591 17:39:27 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.591 17:39:27 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.591 17:39:27 json_config -- paths/export.sh@5 -- # export PATH 00:05:00.591 17:39:27 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@51 -- # : 0 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:00.591 17:39:27 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:00.591 17:39:27 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.592 17:39:27 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.592 17:39:27 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:00.592 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:00.592 17:39:27 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:00.592 17:39:27 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:00.592 17:39:27 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:00.592 17:39:27 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:00.592 17:39:27 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:00.592 17:39:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:00.592 17:39:27 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:00.592 17:39:27 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:00.592 WARNING: No tests are enabled so not running JSON configuration tests 00:05:00.592 17:39:27 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:00.592 17:39:27 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:00.592 ************************************ 00:05:00.592 END TEST json_config 00:05:00.592 ************************************ 00:05:00.592 00:05:00.592 real 0m0.208s 00:05:00.592 user 0m0.120s 00:05:00.592 sys 0m0.097s 00:05:00.592 17:39:27 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:00.592 17:39:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:00.592 17:39:27 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:00.592 17:39:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:00.592 17:39:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:00.592 17:39:27 -- common/autotest_common.sh@10 -- # set +x 00:05:00.592 ************************************ 00:05:00.592 START TEST json_config_extra_key 00:05:00.592 ************************************ 00:05:00.592 17:39:27 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:00.852 17:39:27 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:00.852 17:39:27 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:00.852 17:39:27 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:00.852 17:39:27 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:00.852 17:39:27 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:00.852 17:39:27 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:00.852 17:39:27 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:00.852 17:39:27 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:00.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.852 --rc genhtml_branch_coverage=1 00:05:00.852 --rc genhtml_function_coverage=1 00:05:00.852 --rc genhtml_legend=1 00:05:00.852 --rc geninfo_all_blocks=1 00:05:00.852 --rc geninfo_unexecuted_blocks=1 00:05:00.852 00:05:00.852 ' 00:05:00.852 17:39:27 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:00.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.852 --rc genhtml_branch_coverage=1 00:05:00.852 --rc genhtml_function_coverage=1 00:05:00.852 --rc genhtml_legend=1 00:05:00.852 --rc geninfo_all_blocks=1 00:05:00.852 --rc geninfo_unexecuted_blocks=1 00:05:00.852 00:05:00.852 ' 00:05:00.852 17:39:27 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:00.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.852 --rc genhtml_branch_coverage=1 00:05:00.852 --rc genhtml_function_coverage=1 00:05:00.852 --rc genhtml_legend=1 00:05:00.852 --rc geninfo_all_blocks=1 00:05:00.852 --rc geninfo_unexecuted_blocks=1 00:05:00.852 00:05:00.852 ' 00:05:00.852 17:39:27 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:00.852 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:00.852 --rc genhtml_branch_coverage=1 00:05:00.852 --rc 
genhtml_function_coverage=1 00:05:00.852 --rc genhtml_legend=1 00:05:00.852 --rc geninfo_all_blocks=1 00:05:00.852 --rc geninfo_unexecuted_blocks=1 00:05:00.852 00:05:00.852 ' 00:05:00.852 17:39:27 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:00.852 17:39:27 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:00.852 17:39:27 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:00.852 17:39:27 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:00.852 17:39:27 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:00.852 17:39:27 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:00.852 17:39:27 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:00.852 17:39:27 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:00.852 17:39:27 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:00.852 17:39:27 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:00.852 17:39:27 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:00.852 17:39:27 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:00.852 17:39:27 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:f7142c3e-fe33-4b72-b423-d576a444e09d 00:05:00.852 17:39:27 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=f7142c3e-fe33-4b72-b423-d576a444e09d 00:05:00.853 17:39:27 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:00.853 17:39:27 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:00.853 17:39:27 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:00.853 17:39:27 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:00.853 17:39:27 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:00.853 17:39:27 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:00.853 17:39:27 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:00.853 17:39:27 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:00.853 17:39:27 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:00.853 17:39:27 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.853 17:39:27 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.853 17:39:27 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.853 17:39:27 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:00.853 17:39:27 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:00.853 17:39:27 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:00.853 17:39:27 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:00.853 17:39:27 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:00.853 17:39:27 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:00.853 17:39:27 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:00.853 17:39:27 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:00.853 17:39:27 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:00.853 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:00.853 17:39:27 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:00.853 17:39:27 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:00.853 17:39:27 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:00.853 17:39:27 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:00.853 17:39:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:00.853 17:39:27 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:00.853 17:39:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:00.853 17:39:27 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:00.853 17:39:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:00.853 17:39:27 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:00.853 17:39:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:00.853 17:39:27 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:00.853 17:39:27 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:00.853 INFO: launching applications... 00:05:00.853 17:39:27 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
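[Editor's note] The "lt 1.15 2" / cmp_versions walk traced above is scripts/common.sh choosing LCOV option spellings by lcov version: both version strings are split on "."/"-"/":" and compared numerically field by field, and, as traced, the pre-2.0 "--rc lcov_*" option names are selected. A minimal bash re-creation of that comparison; the name cmp_lt is a hypothetical stand-in for the real lt/cmp_versions helper pair:

    # Compare dotted version strings numerically, field by field; returns 0
    # when $1 is strictly less than $2 (missing or empty fields count as 0).
    cmp_lt() {
        local IFS=.-:    # same separators the traced helper splits on
        local -a a b
        local i
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1   # equal is not less-than
    }

    # Usage mirroring the trace: pick the old option names for a pre-2.0 lcov.
    if cmp_lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi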
00:05:00.853 17:39:27 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:00.853 17:39:27 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:00.853 17:39:27 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:00.853 17:39:27 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:00.853 17:39:27 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:00.853 17:39:27 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:00.853 17:39:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.853 17:39:27 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:00.853 17:39:27 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58526 00:05:00.853 17:39:27 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:00.853 Waiting for target to run... 00:05:00.853 17:39:27 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58526 /var/tmp/spdk_tgt.sock 00:05:00.853 17:39:27 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58526 ']' 00:05:00.853 17:39:27 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:00.853 17:39:27 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:00.853 17:39:27 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:00.853 17:39:27 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:00.853 17:39:27 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.853 17:39:27 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:01.112 [2024-11-20 17:39:28.070661] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:05:01.112 [2024-11-20 17:39:28.070803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58526 ] 00:05:01.372 [2024-11-20 17:39:28.462393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.631 [2024-11-20 17:39:28.569667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.570 17:39:29 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.570 17:39:29 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:02.570 00:05:02.570 INFO: shutting down applications... 00:05:02.570 17:39:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:02.570 17:39:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
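[Editor's note] The json_config_test_start_app call above boils down to: launch spdk_tgt in the background with the JSON config, then poll its RPC socket until it answers (waitforlisten, max_retries=100 in the trace). A hedged sketch of that start-and-wait pattern using only flags visible in the trace; the half-second retry cadence is an assumption:

    # Start the target with the extra-key JSON config and a private RPC socket.
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    tgt_pid=$!

    # The target counts as "running" once its socket answers a trivial RPC.
    for ((i = 0; i < 100; i++)); do
        if scripts/rpc.py -s /var/tmp/spdk_tgt.sock -t 1 rpc_get_methods &> /dev/null; then
            break
        fi
        kill -0 "$tgt_pid" || exit 1   # give up early if the target already died
        sleep 0.5
    done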
00:05:02.570 17:39:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:02.570 17:39:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:02.570 17:39:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:02.570 17:39:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58526 ]] 00:05:02.570 17:39:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58526 00:05:02.570 17:39:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:02.570 17:39:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.570 17:39:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58526 00:05:02.570 17:39:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:02.831 17:39:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:02.831 17:39:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:02.831 17:39:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58526 00:05:02.831 17:39:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.399 17:39:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.399 17:39:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.399 17:39:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58526 00:05:03.399 17:39:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:03.965 17:39:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:03.965 17:39:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:03.965 17:39:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58526 00:05:03.965 17:39:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.533 17:39:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.533 17:39:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.533 17:39:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58526 00:05:04.533 17:39:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:04.792 17:39:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:04.792 17:39:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.792 17:39:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58526 00:05:04.792 17:39:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.361 17:39:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.361 17:39:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.361 17:39:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58526 00:05:05.361 17:39:32 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:05.361 17:39:32 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:05.361 17:39:32 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:05.361 17:39:32 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:05.361 SPDK target shutdown done 00:05:05.361 Success 00:05:05.361 17:39:32 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:05.361 ************************************ 00:05:05.361 END TEST json_config_extra_key 00:05:05.361 
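[Editor's note] The shutdown traced above is a bounded SIGINT-then-poll loop: json_config/common.sh sends SIGINT once, then probes the PID with kill -0 every half second, giving the target up to 30 tries (about 15 s) to exit cleanly before the test would fail. Reduced to its core:

    # Ask the target to shut down, then wait for the PID to disappear.
    kill -SIGINT "$tgt_pid"
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$tgt_pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5   # matches the repeated "sleep 0.5" records above
    done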
************************************ 00:05:05.361 00:05:05.361 real 0m4.693s 00:05:05.361 user 0m4.184s 00:05:05.361 sys 0m0.607s 00:05:05.361 17:39:32 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.361 17:39:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:05.361 17:39:32 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:05.361 17:39:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.361 17:39:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.361 17:39:32 -- common/autotest_common.sh@10 -- # set +x 00:05:05.361 ************************************ 00:05:05.361 START TEST alias_rpc 00:05:05.361 ************************************ 00:05:05.361 17:39:32 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:05.620 * Looking for test storage... 00:05:05.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:05.620 17:39:32 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.620 17:39:32 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.620 17:39:32 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.620 17:39:32 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.620 17:39:32 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:05.620 17:39:32 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.620 17:39:32 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.620 --rc genhtml_branch_coverage=1 00:05:05.620 --rc genhtml_function_coverage=1 00:05:05.620 --rc genhtml_legend=1 00:05:05.620 --rc geninfo_all_blocks=1 00:05:05.620 --rc geninfo_unexecuted_blocks=1 00:05:05.620 00:05:05.620 ' 00:05:05.620 17:39:32 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.620 --rc genhtml_branch_coverage=1 00:05:05.620 --rc genhtml_function_coverage=1 00:05:05.620 --rc genhtml_legend=1 00:05:05.620 --rc geninfo_all_blocks=1 00:05:05.620 --rc geninfo_unexecuted_blocks=1 00:05:05.620 00:05:05.620 ' 00:05:05.620 17:39:32 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.620 --rc genhtml_branch_coverage=1 00:05:05.620 --rc genhtml_function_coverage=1 00:05:05.620 --rc genhtml_legend=1 00:05:05.620 --rc geninfo_all_blocks=1 00:05:05.620 --rc geninfo_unexecuted_blocks=1 00:05:05.620 00:05:05.620 ' 00:05:05.620 17:39:32 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.620 --rc genhtml_branch_coverage=1 00:05:05.620 --rc genhtml_function_coverage=1 00:05:05.620 --rc genhtml_legend=1 00:05:05.620 --rc geninfo_all_blocks=1 00:05:05.620 --rc geninfo_unexecuted_blocks=1 00:05:05.620 00:05:05.620 ' 00:05:05.620 17:39:32 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:05.620 17:39:32 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58638 00:05:05.620 17:39:32 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:05.620 17:39:32 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58638 00:05:05.620 17:39:32 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58638 ']' 00:05:05.620 17:39:32 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.620 17:39:32 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
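[Editor's note] Before issuing any RPCs, alias_rpc.sh arms an ERR trap ("trap 'killprocess $spdk_tgt_pid; exit 1' ERR" in the trace), so any failing command tears the daemon down instead of leaking a running spdk_tgt. A minimal sketch of that cleanup pattern; start_spdk_tgt is a hypothetical stand-in for the launch-and-wait steps shown earlier, and the stdin redirect and file name on load_config are assumptions, since xtrace does not show redirections:

    spdk_tgt_pid=$(start_spdk_tgt)                  # hypothetical launch helper
    trap 'killprocess "$spdk_tgt_pid"; exit 1' ERR  # any failure below kills the target

    scripts/rpc.py load_config -i < alias_config.json   # the RPC this test exercises

    trap - ERR   # disarm once the fallible section is done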
00:05:05.620 17:39:32 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.620 17:39:32 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.620 17:39:32 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.880 [2024-11-20 17:39:32.832151] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:05:05.880 [2024-11-20 17:39:32.832302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58638 ] 00:05:05.880 [2024-11-20 17:39:33.015520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.139 [2024-11-20 17:39:33.126312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.137 17:39:34 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.137 17:39:34 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.137 17:39:34 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:07.137 17:39:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58638 00:05:07.137 17:39:34 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58638 ']' 00:05:07.137 17:39:34 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58638 00:05:07.137 17:39:34 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:07.137 17:39:34 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:07.137 17:39:34 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58638 00:05:07.396 17:39:34 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:07.396 17:39:34 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:07.396 killing process with pid 58638 00:05:07.396 17:39:34 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58638' 00:05:07.396 17:39:34 alias_rpc -- common/autotest_common.sh@973 -- # kill 58638 00:05:07.396 17:39:34 alias_rpc -- common/autotest_common.sh@978 -- # wait 58638 00:05:09.931 00:05:09.931 real 0m4.259s 00:05:09.931 user 0m4.211s 00:05:09.931 sys 0m0.637s 00:05:09.931 17:39:36 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.931 17:39:36 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.931 ************************************ 00:05:09.931 END TEST alias_rpc 00:05:09.931 ************************************ 00:05:09.931 17:39:36 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:09.931 17:39:36 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:09.931 17:39:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.931 17:39:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.931 17:39:36 -- common/autotest_common.sh@10 -- # set +x 00:05:09.931 ************************************ 00:05:09.931 START TEST spdkcli_tcp 00:05:09.931 ************************************ 00:05:09.931 17:39:36 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:09.931 * Looking for test storage... 
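[Editor's note] killprocess, traced above when alias_rpc tears down PID 58638, is a guarded kill: it checks that the PID is non-empty and still alive, looks up the command name with ps so it never blindly signals a sudo wrapper, then kills and reaps the process. A hedged reduction; the real helper's sudo branch is elided here:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2> /dev/null || return 1     # must still be running
        local name
        name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 in the trace
        [[ $name == sudo ]] && return 1             # sudo handling elided in this sketch
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                  # reap to collect the exit status
    }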
00:05:09.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:09.931 17:39:36 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:09.931 17:39:36 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:09.931 17:39:36 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:09.931 17:39:37 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.931 17:39:37 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:09.931 17:39:37 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.931 17:39:37 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:09.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.931 --rc genhtml_branch_coverage=1 00:05:09.931 --rc genhtml_function_coverage=1 00:05:09.931 --rc genhtml_legend=1 00:05:09.931 --rc geninfo_all_blocks=1 00:05:09.931 --rc geninfo_unexecuted_blocks=1 00:05:09.931 00:05:09.931 ' 00:05:09.931 17:39:37 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:09.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.931 --rc genhtml_branch_coverage=1 00:05:09.931 --rc genhtml_function_coverage=1 00:05:09.931 --rc genhtml_legend=1 00:05:09.931 --rc geninfo_all_blocks=1 00:05:09.931 --rc geninfo_unexecuted_blocks=1 00:05:09.931 
00:05:09.931 ' 00:05:09.931 17:39:37 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:09.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.931 --rc genhtml_branch_coverage=1 00:05:09.931 --rc genhtml_function_coverage=1 00:05:09.931 --rc genhtml_legend=1 00:05:09.931 --rc geninfo_all_blocks=1 00:05:09.931 --rc geninfo_unexecuted_blocks=1 00:05:09.931 00:05:09.931 ' 00:05:09.932 17:39:37 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:09.932 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.932 --rc genhtml_branch_coverage=1 00:05:09.932 --rc genhtml_function_coverage=1 00:05:09.932 --rc genhtml_legend=1 00:05:09.932 --rc geninfo_all_blocks=1 00:05:09.932 --rc geninfo_unexecuted_blocks=1 00:05:09.932 00:05:09.932 ' 00:05:09.932 17:39:37 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:09.932 17:39:37 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:09.932 17:39:37 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:09.932 17:39:37 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:09.932 17:39:37 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:09.932 17:39:37 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:09.932 17:39:37 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:09.932 17:39:37 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:09.932 17:39:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.932 17:39:37 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58745 00:05:09.932 17:39:37 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:09.932 17:39:37 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58745 00:05:09.932 17:39:37 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58745 ']' 00:05:09.932 17:39:37 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.932 17:39:37 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.932 17:39:37 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.932 17:39:37 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.932 17:39:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:10.192 [2024-11-20 17:39:37.156814] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:05:10.192 [2024-11-20 17:39:37.156940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58745 ] 00:05:10.192 [2024-11-20 17:39:37.338517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.451 [2024-11-20 17:39:37.458537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.451 [2024-11-20 17:39:37.458570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.389 17:39:38 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.389 17:39:38 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:11.389 17:39:38 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58767 00:05:11.389 17:39:38 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:11.389 17:39:38 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:11.389 [ 00:05:11.389 "bdev_malloc_delete", 00:05:11.389 "bdev_malloc_create", 00:05:11.389 "bdev_null_resize", 00:05:11.389 "bdev_null_delete", 00:05:11.389 "bdev_null_create", 00:05:11.389 "bdev_nvme_cuse_unregister", 00:05:11.389 "bdev_nvme_cuse_register", 00:05:11.389 "bdev_opal_new_user", 00:05:11.389 "bdev_opal_set_lock_state", 00:05:11.389 "bdev_opal_delete", 00:05:11.389 "bdev_opal_get_info", 00:05:11.389 "bdev_opal_create", 00:05:11.389 "bdev_nvme_opal_revert", 00:05:11.389 "bdev_nvme_opal_init", 00:05:11.389 "bdev_nvme_send_cmd", 00:05:11.389 "bdev_nvme_set_keys", 00:05:11.389 "bdev_nvme_get_path_iostat", 00:05:11.389 "bdev_nvme_get_mdns_discovery_info", 00:05:11.389 "bdev_nvme_stop_mdns_discovery", 00:05:11.389 "bdev_nvme_start_mdns_discovery", 00:05:11.389 "bdev_nvme_set_multipath_policy", 00:05:11.389 "bdev_nvme_set_preferred_path", 00:05:11.389 "bdev_nvme_get_io_paths", 00:05:11.389 "bdev_nvme_remove_error_injection", 00:05:11.389 "bdev_nvme_add_error_injection", 00:05:11.389 "bdev_nvme_get_discovery_info", 00:05:11.389 "bdev_nvme_stop_discovery", 00:05:11.389 "bdev_nvme_start_discovery", 00:05:11.389 "bdev_nvme_get_controller_health_info", 00:05:11.389 "bdev_nvme_disable_controller", 00:05:11.389 "bdev_nvme_enable_controller", 00:05:11.389 "bdev_nvme_reset_controller", 00:05:11.389 "bdev_nvme_get_transport_statistics", 00:05:11.389 "bdev_nvme_apply_firmware", 00:05:11.389 "bdev_nvme_detach_controller", 00:05:11.389 "bdev_nvme_get_controllers", 00:05:11.389 "bdev_nvme_attach_controller", 00:05:11.389 "bdev_nvme_set_hotplug", 00:05:11.389 "bdev_nvme_set_options", 00:05:11.389 "bdev_passthru_delete", 00:05:11.389 "bdev_passthru_create", 00:05:11.389 "bdev_lvol_set_parent_bdev", 00:05:11.389 "bdev_lvol_set_parent", 00:05:11.389 "bdev_lvol_check_shallow_copy", 00:05:11.389 "bdev_lvol_start_shallow_copy", 00:05:11.389 "bdev_lvol_grow_lvstore", 00:05:11.389 "bdev_lvol_get_lvols", 00:05:11.389 "bdev_lvol_get_lvstores", 00:05:11.389 "bdev_lvol_delete", 00:05:11.389 "bdev_lvol_set_read_only", 00:05:11.389 "bdev_lvol_resize", 00:05:11.389 "bdev_lvol_decouple_parent", 00:05:11.389 "bdev_lvol_inflate", 00:05:11.389 "bdev_lvol_rename", 00:05:11.389 "bdev_lvol_clone_bdev", 00:05:11.389 "bdev_lvol_clone", 00:05:11.389 "bdev_lvol_snapshot", 00:05:11.389 "bdev_lvol_create", 00:05:11.389 "bdev_lvol_delete_lvstore", 00:05:11.389 "bdev_lvol_rename_lvstore", 00:05:11.389 
"bdev_lvol_create_lvstore", 00:05:11.389 "bdev_raid_set_options", 00:05:11.389 "bdev_raid_remove_base_bdev", 00:05:11.389 "bdev_raid_add_base_bdev", 00:05:11.389 "bdev_raid_delete", 00:05:11.389 "bdev_raid_create", 00:05:11.389 "bdev_raid_get_bdevs", 00:05:11.389 "bdev_error_inject_error", 00:05:11.389 "bdev_error_delete", 00:05:11.389 "bdev_error_create", 00:05:11.389 "bdev_split_delete", 00:05:11.389 "bdev_split_create", 00:05:11.390 "bdev_delay_delete", 00:05:11.390 "bdev_delay_create", 00:05:11.390 "bdev_delay_update_latency", 00:05:11.390 "bdev_zone_block_delete", 00:05:11.390 "bdev_zone_block_create", 00:05:11.390 "blobfs_create", 00:05:11.390 "blobfs_detect", 00:05:11.390 "blobfs_set_cache_size", 00:05:11.390 "bdev_xnvme_delete", 00:05:11.390 "bdev_xnvme_create", 00:05:11.390 "bdev_aio_delete", 00:05:11.390 "bdev_aio_rescan", 00:05:11.390 "bdev_aio_create", 00:05:11.390 "bdev_ftl_set_property", 00:05:11.390 "bdev_ftl_get_properties", 00:05:11.390 "bdev_ftl_get_stats", 00:05:11.390 "bdev_ftl_unmap", 00:05:11.390 "bdev_ftl_unload", 00:05:11.390 "bdev_ftl_delete", 00:05:11.390 "bdev_ftl_load", 00:05:11.390 "bdev_ftl_create", 00:05:11.390 "bdev_virtio_attach_controller", 00:05:11.390 "bdev_virtio_scsi_get_devices", 00:05:11.390 "bdev_virtio_detach_controller", 00:05:11.390 "bdev_virtio_blk_set_hotplug", 00:05:11.390 "bdev_iscsi_delete", 00:05:11.390 "bdev_iscsi_create", 00:05:11.390 "bdev_iscsi_set_options", 00:05:11.390 "accel_error_inject_error", 00:05:11.390 "ioat_scan_accel_module", 00:05:11.390 "dsa_scan_accel_module", 00:05:11.390 "iaa_scan_accel_module", 00:05:11.390 "keyring_file_remove_key", 00:05:11.390 "keyring_file_add_key", 00:05:11.390 "keyring_linux_set_options", 00:05:11.390 "fsdev_aio_delete", 00:05:11.390 "fsdev_aio_create", 00:05:11.390 "iscsi_get_histogram", 00:05:11.390 "iscsi_enable_histogram", 00:05:11.390 "iscsi_set_options", 00:05:11.390 "iscsi_get_auth_groups", 00:05:11.390 "iscsi_auth_group_remove_secret", 00:05:11.390 "iscsi_auth_group_add_secret", 00:05:11.390 "iscsi_delete_auth_group", 00:05:11.390 "iscsi_create_auth_group", 00:05:11.390 "iscsi_set_discovery_auth", 00:05:11.390 "iscsi_get_options", 00:05:11.390 "iscsi_target_node_request_logout", 00:05:11.390 "iscsi_target_node_set_redirect", 00:05:11.390 "iscsi_target_node_set_auth", 00:05:11.390 "iscsi_target_node_add_lun", 00:05:11.390 "iscsi_get_stats", 00:05:11.390 "iscsi_get_connections", 00:05:11.390 "iscsi_portal_group_set_auth", 00:05:11.390 "iscsi_start_portal_group", 00:05:11.390 "iscsi_delete_portal_group", 00:05:11.390 "iscsi_create_portal_group", 00:05:11.390 "iscsi_get_portal_groups", 00:05:11.390 "iscsi_delete_target_node", 00:05:11.390 "iscsi_target_node_remove_pg_ig_maps", 00:05:11.390 "iscsi_target_node_add_pg_ig_maps", 00:05:11.390 "iscsi_create_target_node", 00:05:11.390 "iscsi_get_target_nodes", 00:05:11.390 "iscsi_delete_initiator_group", 00:05:11.390 "iscsi_initiator_group_remove_initiators", 00:05:11.390 "iscsi_initiator_group_add_initiators", 00:05:11.390 "iscsi_create_initiator_group", 00:05:11.390 "iscsi_get_initiator_groups", 00:05:11.390 "nvmf_set_crdt", 00:05:11.390 "nvmf_set_config", 00:05:11.390 "nvmf_set_max_subsystems", 00:05:11.390 "nvmf_stop_mdns_prr", 00:05:11.390 "nvmf_publish_mdns_prr", 00:05:11.390 "nvmf_subsystem_get_listeners", 00:05:11.390 "nvmf_subsystem_get_qpairs", 00:05:11.390 "nvmf_subsystem_get_controllers", 00:05:11.390 "nvmf_get_stats", 00:05:11.390 "nvmf_get_transports", 00:05:11.390 "nvmf_create_transport", 00:05:11.390 "nvmf_get_targets", 00:05:11.390 
"nvmf_delete_target", 00:05:11.390 "nvmf_create_target", 00:05:11.390 "nvmf_subsystem_allow_any_host", 00:05:11.390 "nvmf_subsystem_set_keys", 00:05:11.390 "nvmf_subsystem_remove_host", 00:05:11.390 "nvmf_subsystem_add_host", 00:05:11.390 "nvmf_ns_remove_host", 00:05:11.390 "nvmf_ns_add_host", 00:05:11.390 "nvmf_subsystem_remove_ns", 00:05:11.390 "nvmf_subsystem_set_ns_ana_group", 00:05:11.390 "nvmf_subsystem_add_ns", 00:05:11.390 "nvmf_subsystem_listener_set_ana_state", 00:05:11.390 "nvmf_discovery_get_referrals", 00:05:11.390 "nvmf_discovery_remove_referral", 00:05:11.390 "nvmf_discovery_add_referral", 00:05:11.390 "nvmf_subsystem_remove_listener", 00:05:11.390 "nvmf_subsystem_add_listener", 00:05:11.390 "nvmf_delete_subsystem", 00:05:11.390 "nvmf_create_subsystem", 00:05:11.390 "nvmf_get_subsystems", 00:05:11.390 "env_dpdk_get_mem_stats", 00:05:11.390 "nbd_get_disks", 00:05:11.390 "nbd_stop_disk", 00:05:11.390 "nbd_start_disk", 00:05:11.390 "ublk_recover_disk", 00:05:11.390 "ublk_get_disks", 00:05:11.390 "ublk_stop_disk", 00:05:11.390 "ublk_start_disk", 00:05:11.390 "ublk_destroy_target", 00:05:11.390 "ublk_create_target", 00:05:11.390 "virtio_blk_create_transport", 00:05:11.390 "virtio_blk_get_transports", 00:05:11.390 "vhost_controller_set_coalescing", 00:05:11.390 "vhost_get_controllers", 00:05:11.390 "vhost_delete_controller", 00:05:11.390 "vhost_create_blk_controller", 00:05:11.390 "vhost_scsi_controller_remove_target", 00:05:11.390 "vhost_scsi_controller_add_target", 00:05:11.390 "vhost_start_scsi_controller", 00:05:11.390 "vhost_create_scsi_controller", 00:05:11.390 "thread_set_cpumask", 00:05:11.390 "scheduler_set_options", 00:05:11.390 "framework_get_governor", 00:05:11.390 "framework_get_scheduler", 00:05:11.390 "framework_set_scheduler", 00:05:11.390 "framework_get_reactors", 00:05:11.390 "thread_get_io_channels", 00:05:11.390 "thread_get_pollers", 00:05:11.390 "thread_get_stats", 00:05:11.390 "framework_monitor_context_switch", 00:05:11.390 "spdk_kill_instance", 00:05:11.390 "log_enable_timestamps", 00:05:11.390 "log_get_flags", 00:05:11.390 "log_clear_flag", 00:05:11.390 "log_set_flag", 00:05:11.390 "log_get_level", 00:05:11.390 "log_set_level", 00:05:11.390 "log_get_print_level", 00:05:11.390 "log_set_print_level", 00:05:11.390 "framework_enable_cpumask_locks", 00:05:11.390 "framework_disable_cpumask_locks", 00:05:11.390 "framework_wait_init", 00:05:11.390 "framework_start_init", 00:05:11.390 "scsi_get_devices", 00:05:11.390 "bdev_get_histogram", 00:05:11.390 "bdev_enable_histogram", 00:05:11.390 "bdev_set_qos_limit", 00:05:11.390 "bdev_set_qd_sampling_period", 00:05:11.390 "bdev_get_bdevs", 00:05:11.390 "bdev_reset_iostat", 00:05:11.390 "bdev_get_iostat", 00:05:11.390 "bdev_examine", 00:05:11.390 "bdev_wait_for_examine", 00:05:11.390 "bdev_set_options", 00:05:11.390 "accel_get_stats", 00:05:11.390 "accel_set_options", 00:05:11.390 "accel_set_driver", 00:05:11.390 "accel_crypto_key_destroy", 00:05:11.390 "accel_crypto_keys_get", 00:05:11.390 "accel_crypto_key_create", 00:05:11.390 "accel_assign_opc", 00:05:11.390 "accel_get_module_info", 00:05:11.390 "accel_get_opc_assignments", 00:05:11.390 "vmd_rescan", 00:05:11.390 "vmd_remove_device", 00:05:11.390 "vmd_enable", 00:05:11.390 "sock_get_default_impl", 00:05:11.390 "sock_set_default_impl", 00:05:11.390 "sock_impl_set_options", 00:05:11.390 "sock_impl_get_options", 00:05:11.390 "iobuf_get_stats", 00:05:11.390 "iobuf_set_options", 00:05:11.390 "keyring_get_keys", 00:05:11.390 "framework_get_pci_devices", 00:05:11.390 
"framework_get_config", 00:05:11.390 "framework_get_subsystems", 00:05:11.390 "fsdev_set_opts", 00:05:11.390 "fsdev_get_opts", 00:05:11.390 "trace_get_info", 00:05:11.390 "trace_get_tpoint_group_mask", 00:05:11.390 "trace_disable_tpoint_group", 00:05:11.390 "trace_enable_tpoint_group", 00:05:11.390 "trace_clear_tpoint_mask", 00:05:11.390 "trace_set_tpoint_mask", 00:05:11.390 "notify_get_notifications", 00:05:11.390 "notify_get_types", 00:05:11.390 "spdk_get_version", 00:05:11.390 "rpc_get_methods" 00:05:11.390 ] 00:05:11.650 17:39:38 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:11.650 17:39:38 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.650 17:39:38 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.650 17:39:38 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:11.650 17:39:38 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58745 00:05:11.650 17:39:38 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58745 ']' 00:05:11.650 17:39:38 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58745 00:05:11.650 17:39:38 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:11.650 17:39:38 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:11.650 17:39:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58745 00:05:11.650 killing process with pid 58745 00:05:11.650 17:39:38 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:11.650 17:39:38 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:11.650 17:39:38 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58745' 00:05:11.650 17:39:38 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58745 00:05:11.650 17:39:38 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58745 00:05:14.213 00:05:14.214 real 0m4.246s 00:05:14.214 user 0m7.550s 00:05:14.214 sys 0m0.661s 00:05:14.214 ************************************ 00:05:14.214 END TEST spdkcli_tcp 00:05:14.214 ************************************ 00:05:14.214 17:39:41 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.214 17:39:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.214 17:39:41 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:14.214 17:39:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.214 17:39:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.214 17:39:41 -- common/autotest_common.sh@10 -- # set +x 00:05:14.214 ************************************ 00:05:14.214 START TEST dpdk_mem_utility 00:05:14.214 ************************************ 00:05:14.214 17:39:41 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:14.214 * Looking for test storage... 
00:05:14.214 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:14.214 17:39:41 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:14.214 17:39:41 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:05:14.214 17:39:41 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:14.214 17:39:41 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.214 17:39:41 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:14.214 17:39:41 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.214 17:39:41 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:14.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.214 --rc genhtml_branch_coverage=1 00:05:14.214 --rc genhtml_function_coverage=1 00:05:14.214 --rc genhtml_legend=1 00:05:14.214 --rc geninfo_all_blocks=1 00:05:14.214 --rc geninfo_unexecuted_blocks=1 00:05:14.214 00:05:14.214 ' 00:05:14.214 17:39:41 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:14.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.214 --rc 
genhtml_branch_coverage=1 00:05:14.214 --rc genhtml_function_coverage=1 00:05:14.214 --rc genhtml_legend=1 00:05:14.214 --rc geninfo_all_blocks=1 00:05:14.214 --rc geninfo_unexecuted_blocks=1 00:05:14.214 00:05:14.214 ' 00:05:14.214 17:39:41 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:14.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.214 --rc genhtml_branch_coverage=1 00:05:14.214 --rc genhtml_function_coverage=1 00:05:14.214 --rc genhtml_legend=1 00:05:14.214 --rc geninfo_all_blocks=1 00:05:14.214 --rc geninfo_unexecuted_blocks=1 00:05:14.214 00:05:14.214 ' 00:05:14.214 17:39:41 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:14.214 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.214 --rc genhtml_branch_coverage=1 00:05:14.214 --rc genhtml_function_coverage=1 00:05:14.214 --rc genhtml_legend=1 00:05:14.214 --rc geninfo_all_blocks=1 00:05:14.214 --rc geninfo_unexecuted_blocks=1 00:05:14.214 00:05:14.214 ' 00:05:14.214 17:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:14.214 17:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58871 00:05:14.214 17:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.214 17:39:41 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58871 00:05:14.214 17:39:41 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58871 ']' 00:05:14.214 17:39:41 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.214 17:39:41 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.214 17:39:41 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.214 17:39:41 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.214 17:39:41 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:14.473 [2024-11-20 17:39:41.467200] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
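[Editor's note] The memory report in the records that follow is produced in two steps, sketched here with commands taken from the trace: the env_dpdk_get_mem_stats RPC asks the target to write its DPDK memory state to a dump file, and dpdk_mem_info.py renders that file, first as the heap/mempool/memzone summary and then, with -m 0, as the per-element map of heap id 0:

    scripts/rpc.py env_dpdk_get_mem_stats   # -> {"filename": "/tmp/spdk_mem_dump.txt"}
    scripts/dpdk_mem_info.py                # summary: heaps, mempools, memzones
    scripts/dpdk_mem_info.py -m 0           # element-by-element map of heap id 0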
00:05:14.473 [2024-11-20 17:39:41.467335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58871 ] 00:05:14.473 [2024-11-20 17:39:41.639665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.731 [2024-11-20 17:39:41.759685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.739 17:39:42 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.739 17:39:42 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:15.739 17:39:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:15.739 17:39:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:15.739 17:39:42 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.739 17:39:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.739 { 00:05:15.739 "filename": "/tmp/spdk_mem_dump.txt" 00:05:15.739 } 00:05:15.739 17:39:42 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.739 17:39:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:15.739 DPDK memory size 824.000000 MiB in 1 heap(s) 00:05:15.739 1 heaps totaling size 824.000000 MiB 00:05:15.739 size: 824.000000 MiB heap id: 0 00:05:15.739 end heaps---------- 00:05:15.739 9 mempools totaling size 603.782043 MiB 00:05:15.739 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:15.739 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:15.739 size: 100.555481 MiB name: bdev_io_58871 00:05:15.739 size: 50.003479 MiB name: msgpool_58871 00:05:15.739 size: 36.509338 MiB name: fsdev_io_58871 00:05:15.739 size: 21.763794 MiB name: PDU_Pool 00:05:15.739 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:15.739 size: 4.133484 MiB name: evtpool_58871 00:05:15.739 size: 0.026123 MiB name: Session_Pool 00:05:15.739 end mempools------- 00:05:15.739 6 memzones totaling size 4.142822 MiB 00:05:15.739 size: 1.000366 MiB name: RG_ring_0_58871 00:05:15.739 size: 1.000366 MiB name: RG_ring_1_58871 00:05:15.739 size: 1.000366 MiB name: RG_ring_4_58871 00:05:15.739 size: 1.000366 MiB name: RG_ring_5_58871 00:05:15.739 size: 0.125366 MiB name: RG_ring_2_58871 00:05:15.739 size: 0.015991 MiB name: RG_ring_3_58871 00:05:15.739 end memzones------- 00:05:15.739 17:39:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:15.739 heap id: 0 total size: 824.000000 MiB number of busy elements: 324 number of free elements: 18 00:05:15.739 list of free elements. 
size: 16.779175 MiB 00:05:15.740 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:15.740 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:15.740 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:15.740 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:15.740 element at address: 0x200019900040 with size: 0.999939 MiB 00:05:15.740 element at address: 0x200019a00000 with size: 0.999084 MiB 00:05:15.740 element at address: 0x200032600000 with size: 0.994324 MiB 00:05:15.740 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:15.740 element at address: 0x200019200000 with size: 0.959656 MiB 00:05:15.740 element at address: 0x200019d00040 with size: 0.936401 MiB 00:05:15.740 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:15.740 element at address: 0x20001b400000 with size: 0.560242 MiB 00:05:15.740 element at address: 0x200000c00000 with size: 0.489197 MiB 00:05:15.740 element at address: 0x200019600000 with size: 0.488220 MiB 00:05:15.740 element at address: 0x200019e00000 with size: 0.485413 MiB 00:05:15.740 element at address: 0x200012c00000 with size: 0.433472 MiB 00:05:15.740 element at address: 0x200028800000 with size: 0.390442 MiB 00:05:15.740 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:15.740 list of standard malloc elements. size: 199.289917 MiB 00:05:15.740 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:15.740 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:15.740 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:15.740 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:15.740 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:05:15.740 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:15.740 element at address: 0x200019deff40 with size: 0.062683 MiB 00:05:15.740 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:15.740 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:15.740 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:05:15.740 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:15.740 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:15.740 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:15.740 element at 
address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012c6f880 
with size: 0.000244 MiB 00:05:15.740 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:05:15.740 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:05:15.740 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001967d1c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:05:15.741 element at address: 0x200019affc40 with size: 0.000244 MiB 00:05:15.741 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b48f6c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b48f7c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b48f8c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b48f9c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b48fac0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b48fbc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b48fcc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b490ac0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4914c0 with size: 0.000244 MiB 
00:05:15.741 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4926c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4942c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:05:15.741 element at 
address: 0x20001b4946c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:05:15.741 element at address: 0x200028863f40 with size: 0.000244 MiB 00:05:15.741 element at address: 0x200028864040 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886af80 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886b080 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886b180 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886b280 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886b380 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886b480 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886b580 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886b680 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886b780 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886b880 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886b980 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886be80 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886c080 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886c180 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886c280 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886c380 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886c480 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886c580 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886c680 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886c780 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886c880 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886c980 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886cf80 
with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886d080 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886d180 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886d280 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886d380 with size: 0.000244 MiB 00:05:15.741 element at address: 0x20002886d480 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886d580 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886d680 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886d780 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886d880 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886d980 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886da80 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886db80 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886de80 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886df80 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886e080 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886e180 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886e280 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886e380 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886e480 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886e580 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886e680 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886e780 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886e880 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886e980 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886ed80 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886f080 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886f180 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886f280 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886f380 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886f480 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886f580 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886f680 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886f780 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886f880 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886f980 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:05:15.742 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:05:15.742 list of memzone associated elements. 
size: 607.930908 MiB 00:05:15.742 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:05:15.742 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:15.742 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:05:15.742 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:15.742 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:05:15.742 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_58871_0 00:05:15.742 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:15.742 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58871_0 00:05:15.742 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:15.742 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58871_0 00:05:15.742 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:05:15.742 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:15.742 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:05:15.742 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:15.742 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:15.742 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58871_0 00:05:15.742 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:15.742 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58871 00:05:15.742 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:15.742 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58871 00:05:15.742 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:05:15.742 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:15.742 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:05:15.742 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:15.742 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:15.742 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:15.742 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:05:15.742 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:15.742 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:15.742 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58871 00:05:15.742 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:15.742 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58871 00:05:15.742 element at address: 0x200019affd40 with size: 1.000549 MiB 00:05:15.742 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58871 00:05:15.742 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:05:15.742 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58871 00:05:15.742 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:15.742 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58871 00:05:15.742 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:15.742 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58871 00:05:15.742 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:05:15.742 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:15.742 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:05:15.742 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:15.742 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:05:15.742 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:15.742 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:15.742 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58871 00:05:15.742 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:15.742 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58871 00:05:15.742 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:05:15.742 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:15.742 element at address: 0x200028864140 with size: 0.023804 MiB 00:05:15.742 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:15.742 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:15.742 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58871 00:05:15.742 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:05:15.742 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:15.742 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:15.742 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58871 00:05:15.742 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:15.742 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58871 00:05:15.742 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:15.742 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58871 00:05:15.742 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:05:15.742 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:15.742 17:39:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:15.742 17:39:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58871 00:05:15.742 17:39:42 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58871 ']' 00:05:15.742 17:39:42 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58871 00:05:15.742 17:39:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:15.742 17:39:42 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.742 17:39:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58871 00:05:15.742 17:39:42 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.742 killing process with pid 58871 00:05:15.742 17:39:42 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.742 17:39:42 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58871' 00:05:15.742 17:39:42 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58871 00:05:15.742 17:39:42 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58871 00:05:18.273 00:05:18.273 real 0m4.131s 00:05:18.273 user 0m4.042s 00:05:18.273 sys 0m0.579s 00:05:18.273 ************************************ 00:05:18.273 END TEST dpdk_mem_utility 00:05:18.273 ************************************ 00:05:18.273 17:39:45 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.273 17:39:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.273 17:39:45 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:18.273 17:39:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.273 17:39:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.273 17:39:45 -- common/autotest_common.sh@10 -- # set +x 
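For reference, the dpdk_mem_utility teardown traced just above follows the usual autotest_common.sh killprocess pattern: probe the PID, identify the process, kill it, and reap it. A minimal sketch reconstructed from the xtrace for illustration (not copied from the script; the sudo special case is only hinted at in the trace):

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1               # '[ -z 58871 ]' in the trace above
        kill -0 "$pid" 2>/dev/null || return 0  # probe: process already gone, nothing to kill
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        # the real helper special-cases process_name = sudo; in this run it was reactor_0
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                             # reap the PID so the test script exits cleanly
    }
    killprocess 58871                           # PID taken from this run's log
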
00:05:18.273 ************************************ 00:05:18.273 START TEST event 00:05:18.273 ************************************ 00:05:18.273 17:39:45 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:18.532 * Looking for test storage... 00:05:18.532 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:18.532 17:39:45 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:18.532 17:39:45 event -- common/autotest_common.sh@1693 -- # lcov --version 00:05:18.532 17:39:45 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:18.532 17:39:45 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:18.532 17:39:45 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.532 17:39:45 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.532 17:39:45 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.532 17:39:45 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.532 17:39:45 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.532 17:39:45 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.532 17:39:45 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.532 17:39:45 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.532 17:39:45 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.532 17:39:45 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.532 17:39:45 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.532 17:39:45 event -- scripts/common.sh@344 -- # case "$op" in 00:05:18.532 17:39:45 event -- scripts/common.sh@345 -- # : 1 00:05:18.532 17:39:45 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.532 17:39:45 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.532 17:39:45 event -- scripts/common.sh@365 -- # decimal 1 00:05:18.532 17:39:45 event -- scripts/common.sh@353 -- # local d=1 00:05:18.532 17:39:45 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.532 17:39:45 event -- scripts/common.sh@355 -- # echo 1 00:05:18.532 17:39:45 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.532 17:39:45 event -- scripts/common.sh@366 -- # decimal 2 00:05:18.532 17:39:45 event -- scripts/common.sh@353 -- # local d=2 00:05:18.532 17:39:45 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.532 17:39:45 event -- scripts/common.sh@355 -- # echo 2 00:05:18.532 17:39:45 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.532 17:39:45 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.533 17:39:45 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.533 17:39:45 event -- scripts/common.sh@368 -- # return 0 00:05:18.533 17:39:45 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.533 17:39:45 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:18.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.533 --rc genhtml_branch_coverage=1 00:05:18.533 --rc genhtml_function_coverage=1 00:05:18.533 --rc genhtml_legend=1 00:05:18.533 --rc geninfo_all_blocks=1 00:05:18.533 --rc geninfo_unexecuted_blocks=1 00:05:18.533 00:05:18.533 ' 00:05:18.533 17:39:45 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:18.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.533 --rc genhtml_branch_coverage=1 00:05:18.533 --rc genhtml_function_coverage=1 00:05:18.533 --rc genhtml_legend=1 00:05:18.533 --rc 
geninfo_all_blocks=1 00:05:18.533 --rc geninfo_unexecuted_blocks=1 00:05:18.533 00:05:18.533 ' 00:05:18.533 17:39:45 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:18.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.533 --rc genhtml_branch_coverage=1 00:05:18.533 --rc genhtml_function_coverage=1 00:05:18.533 --rc genhtml_legend=1 00:05:18.533 --rc geninfo_all_blocks=1 00:05:18.533 --rc geninfo_unexecuted_blocks=1 00:05:18.533 00:05:18.533 ' 00:05:18.533 17:39:45 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:18.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.533 --rc genhtml_branch_coverage=1 00:05:18.533 --rc genhtml_function_coverage=1 00:05:18.533 --rc genhtml_legend=1 00:05:18.533 --rc geninfo_all_blocks=1 00:05:18.533 --rc geninfo_unexecuted_blocks=1 00:05:18.533 00:05:18.533 ' 00:05:18.533 17:39:45 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:18.533 17:39:45 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:18.533 17:39:45 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:18.533 17:39:45 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:18.533 17:39:45 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.533 17:39:45 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.533 ************************************ 00:05:18.533 START TEST event_perf 00:05:18.533 ************************************ 00:05:18.533 17:39:45 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:18.533 Running I/O for 1 seconds...[2024-11-20 17:39:45.621217] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:05:18.533 [2024-11-20 17:39:45.621335] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58979 ] 00:05:18.792 [2024-11-20 17:39:45.804428] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.792 [2024-11-20 17:39:45.931313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.792 [2024-11-20 17:39:45.931399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.792 [2024-11-20 17:39:45.931583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.792 [2024-11-20 17:39:45.931617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.168 Running I/O for 1 seconds... 00:05:20.168 lcore 0: 109223 00:05:20.168 lcore 1: 109224 00:05:20.168 lcore 2: 109225 00:05:20.168 lcore 3: 109226 00:05:20.168 done. 
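The 'lt 1.15 2' / cmp_versions trace interleaved above is the scripts/common.sh lcov version gate: both version strings are split on '.', '-' and ':' into arrays and compared component by component. A simplified sketch of that logic, reconstructed from the xtrace (the real script also normalizes non-numeric components through its decimal helper, which is omitted here; missing components are assumed to compare as 0):

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        # walk up to the longer of the two component lists
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            (( a > b )) && { [[ $op == '>' ]]; return; }   # first differing component decides
            (( a < b )) && { [[ $op == '<' ]]; return; }
        done
        return 1   # equal: neither strictly less nor greater
    }
    cmp_versions 1.15 '<' 2 && echo "1.15 < 2"   # matches the successful 'lt 1.15 2' above
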
00:05:20.168 00:05:20.168 real 0m1.610s 00:05:20.168 user 0m4.347s 00:05:20.168 sys 0m0.131s 00:05:20.168 17:39:47 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:20.168 ************************************ 00:05:20.168 END TEST event_perf 00:05:20.168 ************************************ 00:05:20.168 17:39:47 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.168 17:39:47 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:20.168 17:39:47 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:20.168 17:39:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:20.168 17:39:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.168 ************************************ 00:05:20.168 START TEST event_reactor 00:05:20.168 ************************************ 00:05:20.168 17:39:47 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:20.168 [2024-11-20 17:39:47.303922] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:05:20.168 [2024-11-20 17:39:47.304045] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59020 ] 00:05:20.425 [2024-11-20 17:39:47.483644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.683 [2024-11-20 17:39:47.604274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.060 test_start 00:05:22.060 oneshot 00:05:22.060 tick 100 00:05:22.060 tick 100 00:05:22.060 tick 250 00:05:22.060 tick 100 00:05:22.060 tick 100 00:05:22.060 tick 250 00:05:22.060 tick 100 00:05:22.060 tick 500 00:05:22.060 tick 100 00:05:22.060 tick 100 00:05:22.060 tick 250 00:05:22.060 tick 100 00:05:22.060 tick 100 00:05:22.060 test_end 00:05:22.060 00:05:22.060 real 0m1.589s 00:05:22.060 user 0m1.369s 00:05:22.060 sys 0m0.111s 00:05:22.060 17:39:48 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.060 17:39:48 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:22.060 ************************************ 00:05:22.060 END TEST event_reactor 00:05:22.060 ************************************ 00:05:22.060 17:39:48 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.061 17:39:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:22.061 17:39:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.061 17:39:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.061 ************************************ 00:05:22.061 START TEST event_reactor_perf 00:05:22.061 ************************************ 00:05:22.061 17:39:48 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.061 [2024-11-20 17:39:48.954298] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:05:22.061 [2024-11-20 17:39:48.954414] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59056 ] 00:05:22.061 [2024-11-20 17:39:49.137021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.320 [2024-11-20 17:39:49.251109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.310 test_start 00:05:23.310 test_end 00:05:23.310 Performance: 377978 events per second 00:05:23.310 00:05:23.310 real 0m1.574s 00:05:23.310 user 0m1.366s 00:05:23.310 sys 0m0.100s 00:05:23.310 17:39:50 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.310 ************************************ 00:05:23.310 END TEST event_reactor_perf 00:05:23.310 ************************************ 00:05:23.310 17:39:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.569 17:39:50 event -- event/event.sh@49 -- # uname -s 00:05:23.569 17:39:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:23.569 17:39:50 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:23.569 17:39:50 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.569 17:39:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.569 17:39:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.569 ************************************ 00:05:23.569 START TEST event_scheduler 00:05:23.569 ************************************ 00:05:23.569 17:39:50 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:23.569 * Looking for test storage... 
00:05:23.569 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:23.569 17:39:50 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:23.569 17:39:50 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:05:23.569 17:39:50 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:23.829 17:39:50 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.829 17:39:50 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:23.829 17:39:50 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.829 17:39:50 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:23.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.829 --rc genhtml_branch_coverage=1 00:05:23.829 --rc genhtml_function_coverage=1 00:05:23.829 --rc genhtml_legend=1 00:05:23.829 --rc geninfo_all_blocks=1 00:05:23.829 --rc geninfo_unexecuted_blocks=1 00:05:23.829 00:05:23.829 ' 00:05:23.829 17:39:50 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:23.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.829 --rc genhtml_branch_coverage=1 00:05:23.829 --rc genhtml_function_coverage=1 00:05:23.829 --rc genhtml_legend=1 00:05:23.829 --rc geninfo_all_blocks=1 00:05:23.829 --rc geninfo_unexecuted_blocks=1 00:05:23.829 00:05:23.829 ' 00:05:23.829 17:39:50 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:23.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.829 --rc genhtml_branch_coverage=1 00:05:23.829 --rc genhtml_function_coverage=1 00:05:23.829 --rc genhtml_legend=1 00:05:23.829 --rc geninfo_all_blocks=1 00:05:23.829 --rc geninfo_unexecuted_blocks=1 00:05:23.829 00:05:23.829 ' 00:05:23.829 17:39:50 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:23.829 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.829 --rc genhtml_branch_coverage=1 00:05:23.829 --rc genhtml_function_coverage=1 00:05:23.829 --rc genhtml_legend=1 00:05:23.829 --rc geninfo_all_blocks=1 00:05:23.829 --rc geninfo_unexecuted_blocks=1 00:05:23.829 00:05:23.829 ' 00:05:23.829 17:39:50 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:23.829 17:39:50 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59127 00:05:23.829 17:39:50 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:23.829 17:39:50 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.829 17:39:50 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59127 00:05:23.829 17:39:50 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59127 ']' 00:05:23.829 17:39:50 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.829 17:39:50 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.829 17:39:50 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.829 17:39:50 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.829 17:39:50 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.829 [2024-11-20 17:39:50.882834] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:05:23.829 [2024-11-20 17:39:50.883500] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59127 ] 00:05:24.088 [2024-11-20 17:39:51.067503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:24.088 [2024-11-20 17:39:51.191913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.088 [2024-11-20 17:39:51.192095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.088 [2024-11-20 17:39:51.192206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:24.088 [2024-11-20 17:39:51.192238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.655 17:39:51 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.655 17:39:51 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:24.655 17:39:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:24.655 17:39:51 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.655 17:39:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.655 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:24.655 POWER: Cannot set governor of lcore 0 to userspace 00:05:24.655 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:24.655 POWER: Cannot set governor of lcore 0 to performance 00:05:24.655 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:24.655 POWER: Cannot set governor of lcore 0 to userspace 00:05:24.655 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:24.655 POWER: Cannot set governor of lcore 0 to userspace 00:05:24.655 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:24.655 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:24.655 POWER: Unable to set Power Management Environment for lcore 0 00:05:24.655 [2024-11-20 17:39:51.750134] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:05:24.655 [2024-11-20 17:39:51.750188] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:05:24.655 [2024-11-20 17:39:51.750291] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:24.655 [2024-11-20 17:39:51.750347] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:24.655 [2024-11-20 17:39:51.750384] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:24.655 [2024-11-20 17:39:51.750419] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:24.655 17:39:51 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:24.655 17:39:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:24.655 17:39:51 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:24.655 17:39:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.223 [2024-11-20 17:39:52.088946] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:25.223 17:39:52 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.223 17:39:52 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:25.223 17:39:52 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.223 17:39:52 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.223 17:39:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:25.223 ************************************ 00:05:25.223 START TEST scheduler_create_thread 00:05:25.223 ************************************ 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.223 2 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.223 3 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.223 4 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.223 5 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.223 6 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.223 7 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.223 8 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.223 9 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.223 10 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.223 17:39:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.600 17:39:53 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:26.600 17:39:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:26.600 17:39:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:26.600 17:39:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:26.600 17:39:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.535 17:39:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:27.535 17:39:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:27.535 17:39:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:27.535 17:39:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.162 17:39:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:28.162 17:39:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:28.162 17:39:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:28.162 17:39:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:28.162 17:39:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.100 ************************************ 00:05:29.100 END TEST scheduler_create_thread 00:05:29.100 ************************************ 00:05:29.100 17:39:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:29.100 00:05:29.100 real 0m3.884s 00:05:29.100 user 0m0.030s 00:05:29.100 sys 0m0.009s 00:05:29.100 17:39:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.100 17:39:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.100 17:39:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:29.100 17:39:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59127 00:05:29.100 17:39:56 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59127 ']' 00:05:29.100 17:39:56 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59127 00:05:29.100 17:39:56 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:29.100 17:39:56 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.100 17:39:56 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59127 00:05:29.100 killing process with pid 59127 00:05:29.100 17:39:56 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:29.100 17:39:56 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:29.100 17:39:56 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59127' 00:05:29.100 17:39:56 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59127 00:05:29.100 17:39:56 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59127 00:05:29.359 [2024-11-20 17:39:56.371344] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:30.737 ************************************ 00:05:30.737 END TEST event_scheduler 00:05:30.737 ************************************ 00:05:30.737 00:05:30.737 real 0m6.998s 00:05:30.737 user 0m14.443s 00:05:30.737 sys 0m0.558s 00:05:30.737 17:39:57 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.737 17:39:57 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:30.737 17:39:57 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:30.737 17:39:57 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:30.737 17:39:57 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.737 17:39:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.737 17:39:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:30.737 ************************************ 00:05:30.737 START TEST app_repeat 00:05:30.737 ************************************ 00:05:30.737 17:39:57 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:30.737 17:39:57 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.737 17:39:57 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.737 17:39:57 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:30.737 17:39:57 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.737 17:39:57 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:30.737 17:39:57 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:30.737 17:39:57 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:30.737 Process app_repeat pid: 59255 00:05:30.737 spdk_app_start Round 0 00:05:30.737 17:39:57 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59255 00:05:30.737 17:39:57 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:30.737 17:39:57 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59255' 00:05:30.737 17:39:57 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:30.737 17:39:57 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:30.737 17:39:57 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59255 /var/tmp/spdk-nbd.sock 00:05:30.737 17:39:57 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59255 ']' 00:05:30.737 17:39:57 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:30.737 17:39:57 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.737 17:39:57 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:30.737 17:39:57 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:30.737 17:39:57 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.737 17:39:57 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.737 [2024-11-20 17:39:57.717823] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
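The POWER errors during the scheduler test above come from the DPDK power library failing to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor inside the VM, after which SPDK falls back from the dpdk governor to its built-in dynamic scheduler limits. As a quick, editor-added diagnostic (not part of the test suite), governor availability can be checked on a host like this:

    # list each CPU's cpufreq governor, or note when the interface is absent
    for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
        gov=$cpu/cpufreq/scaling_governor
        if [ -r "$gov" ]; then
            printf '%s: %s\n' "${cpu##*/}" "$(cat "$gov")"
        else
            printf '%s: no cpufreq interface (e.g. VM guest)\n' "${cpu##*/}"
        fi
    done
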
00:05:30.737 [2024-11-20 17:39:57.717967] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59255 ] 00:05:30.996 [2024-11-20 17:39:57.911863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.996 [2024-11-20 17:39:58.028419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.996 [2024-11-20 17:39:58.028458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.563 17:39:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.563 17:39:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:31.563 17:39:58 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.821 Malloc0 00:05:31.821 17:39:58 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.080 Malloc1 00:05:32.080 17:39:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.080 17:39:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.080 17:39:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.080 17:39:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.080 17:39:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.080 17:39:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.080 17:39:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.080 17:39:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.080 17:39:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.080 17:39:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.080 17:39:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.080 17:39:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.080 17:39:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:32.080 17:39:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.080 17:39:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.080 17:39:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.339 /dev/nbd0 00:05:32.339 17:39:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.339 17:39:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.339 17:39:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:32.339 17:39:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:32.339 17:39:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:32.339 17:39:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:32.339 17:39:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:32.339 17:39:59 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:05:32.339 17:39:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:32.339 17:39:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:32.339 17:39:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.339 1+0 records in 00:05:32.339 1+0 records out 00:05:32.339 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000395357 s, 10.4 MB/s 00:05:32.339 17:39:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.339 17:39:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:32.339 17:39:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.339 17:39:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:32.339 17:39:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:32.339 17:39:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.339 17:39:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.340 17:39:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:32.598 /dev/nbd1 00:05:32.598 17:39:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:32.598 17:39:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:32.598 17:39:59 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:32.598 17:39:59 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:32.598 17:39:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:32.598 17:39:59 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:32.598 17:39:59 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:32.598 17:39:59 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:32.598 17:39:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:32.598 17:39:59 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:32.598 17:39:59 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:32.598 1+0 records in 00:05:32.598 1+0 records out 00:05:32.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000339586 s, 12.1 MB/s 00:05:32.598 17:39:59 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.598 17:39:59 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:32.598 17:39:59 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:32.598 17:39:59 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:32.598 17:39:59 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:32.598 17:39:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.598 17:39:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.598 17:39:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.598 17:39:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:05:32.598 17:39:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.857 17:39:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:32.857 { 00:05:32.857 "nbd_device": "/dev/nbd0", 00:05:32.857 "bdev_name": "Malloc0" 00:05:32.857 }, 00:05:32.857 { 00:05:32.857 "nbd_device": "/dev/nbd1", 00:05:32.857 "bdev_name": "Malloc1" 00:05:32.857 } 00:05:32.857 ]' 00:05:32.857 17:39:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:32.857 { 00:05:32.857 "nbd_device": "/dev/nbd0", 00:05:32.857 "bdev_name": "Malloc0" 00:05:32.857 }, 00:05:32.857 { 00:05:32.857 "nbd_device": "/dev/nbd1", 00:05:32.857 "bdev_name": "Malloc1" 00:05:32.857 } 00:05:32.857 ]' 00:05:32.857 17:39:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.857 17:40:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:32.857 /dev/nbd1' 00:05:32.857 17:40:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:32.857 /dev/nbd1' 00:05:32.857 17:40:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.857 17:40:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:32.857 17:40:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:32.857 17:40:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:32.857 17:40:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:32.857 17:40:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:32.857 17:40:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.857 17:40:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.857 17:40:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:32.857 17:40:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:32.857 17:40:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:32.857 17:40:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:32.857 256+0 records in 00:05:32.857 256+0 records out 00:05:32.857 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104184 s, 101 MB/s 00:05:32.857 17:40:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.116 256+0 records in 00:05:33.116 256+0 records out 00:05:33.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0297516 s, 35.2 MB/s 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.116 256+0 records in 00:05:33.116 256+0 records out 00:05:33.116 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0350936 s, 29.9 MB/s 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.116 17:40:00 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.116 17:40:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.375 17:40:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.375 17:40:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.375 17:40:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.375 17:40:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.375 17:40:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.375 17:40:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.375 17:40:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.375 17:40:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.375 17:40:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.375 17:40:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.635 17:40:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.635 17:40:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.635 17:40:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.635 17:40:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.635 17:40:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.635 17:40:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.635 17:40:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.635 17:40:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.635 17:40:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.635 17:40:00 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.635 17:40:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.893 17:40:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:33.893 17:40:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:33.893 17:40:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.893 17:40:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:33.893 17:40:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:33.893 17:40:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.894 17:40:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:33.894 17:40:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:33.894 17:40:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:33.894 17:40:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:33.894 17:40:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:33.894 17:40:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:33.894 17:40:00 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.462 17:40:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:35.399 [2024-11-20 17:40:02.518495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.659 [2024-11-20 17:40:02.633418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.659 [2024-11-20 17:40:02.633423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.659 [2024-11-20 17:40:02.831184] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:35.659 [2024-11-20 17:40:02.831469] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.561 17:40:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.561 spdk_app_start Round 1 00:05:37.561 17:40:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:37.561 17:40:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59255 /var/tmp/spdk-nbd.sock 00:05:37.561 17:40:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59255 ']' 00:05:37.561 17:40:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.561 17:40:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.561 17:40:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:37.561 17:40:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.561 17:40:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.561 17:40:04 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.561 17:40:04 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:37.561 17:40:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.127 Malloc0 00:05:38.128 17:40:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.385 Malloc1 00:05:38.385 17:40:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.385 17:40:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.385 17:40:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.385 17:40:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.385 17:40:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.385 17:40:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.385 17:40:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.385 17:40:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.386 17:40:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.386 17:40:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.386 17:40:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.386 17:40:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.386 17:40:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.386 17:40:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.386 17:40:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.386 17:40:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.645 /dev/nbd0 00:05:38.645 17:40:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.645 17:40:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.645 17:40:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:38.645 17:40:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.645 17:40:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.645 17:40:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.645 17:40:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:38.645 17:40:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.645 17:40:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.645 17:40:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.645 17:40:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.645 1+0 records in 00:05:38.645 1+0 records out 
00:05:38.645 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382691 s, 10.7 MB/s 00:05:38.645 17:40:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.645 17:40:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.645 17:40:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.645 17:40:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.645 17:40:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.645 17:40:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.645 17:40:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.645 17:40:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.903 /dev/nbd1 00:05:38.903 17:40:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.903 17:40:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.903 17:40:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:38.903 17:40:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:38.903 17:40:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:38.903 17:40:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:38.903 17:40:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:38.903 17:40:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:38.903 17:40:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:38.903 17:40:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:38.903 17:40:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.903 1+0 records in 00:05:38.903 1+0 records out 00:05:38.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000383581 s, 10.7 MB/s 00:05:38.903 17:40:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.903 17:40:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:38.903 17:40:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.903 17:40:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:38.903 17:40:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:38.903 17:40:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.903 17:40:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.903 17:40:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.903 17:40:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.903 17:40:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.161 17:40:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.162 { 00:05:39.162 "nbd_device": "/dev/nbd0", 00:05:39.162 "bdev_name": "Malloc0" 00:05:39.162 }, 00:05:39.162 { 00:05:39.162 "nbd_device": "/dev/nbd1", 00:05:39.162 "bdev_name": "Malloc1" 00:05:39.162 } 
00:05:39.162 ]' 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.162 { 00:05:39.162 "nbd_device": "/dev/nbd0", 00:05:39.162 "bdev_name": "Malloc0" 00:05:39.162 }, 00:05:39.162 { 00:05:39.162 "nbd_device": "/dev/nbd1", 00:05:39.162 "bdev_name": "Malloc1" 00:05:39.162 } 00:05:39.162 ]' 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.162 /dev/nbd1' 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.162 /dev/nbd1' 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.162 256+0 records in 00:05:39.162 256+0 records out 00:05:39.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0136538 s, 76.8 MB/s 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.162 256+0 records in 00:05:39.162 256+0 records out 00:05:39.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262548 s, 39.9 MB/s 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.162 17:40:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.420 256+0 records in 00:05:39.420 256+0 records out 00:05:39.420 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0320479 s, 32.7 MB/s 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.420 17:40:06 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.420 17:40:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.421 17:40:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.421 17:40:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.421 17:40:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.421 17:40:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.421 17:40:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.421 17:40:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.679 17:40:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.679 17:40:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.679 17:40:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.679 17:40:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.679 17:40:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.680 17:40:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.680 17:40:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.680 17:40:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.680 17:40:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.680 17:40:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.680 17:40:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.680 17:40:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.680 17:40:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.680 17:40:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.680 17:40:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.938 17:40:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.938 17:40:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.938 17:40:07 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:39.938 17:40:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.938 17:40:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.938 17:40:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.938 17:40:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.938 17:40:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.938 17:40:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.938 17:40:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.938 17:40:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.938 17:40:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.938 17:40:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.504 17:40:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.887 [2024-11-20 17:40:08.665054] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.887 [2024-11-20 17:40:08.783174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.887 [2024-11-20 17:40:08.783202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.887 [2024-11-20 17:40:08.986749] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.887 [2024-11-20 17:40:08.986852] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.790 spdk_app_start Round 2 00:05:43.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.790 17:40:10 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:43.790 17:40:10 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:43.790 17:40:10 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59255 /var/tmp/spdk-nbd.sock 00:05:43.790 17:40:10 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59255 ']' 00:05:43.790 17:40:10 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.790 17:40:10 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.790 17:40:10 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:43.790 17:40:10 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.790 17:40:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.790 17:40:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.790 17:40:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:43.790 17:40:10 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.047 Malloc0 00:05:44.047 17:40:11 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:44.306 Malloc1 00:05:44.306 17:40:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.306 17:40:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.306 17:40:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.306 17:40:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:44.306 17:40:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.306 17:40:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:44.306 17:40:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:44.306 17:40:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.306 17:40:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:44.306 17:40:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:44.306 17:40:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:44.306 17:40:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:44.306 17:40:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:44.306 17:40:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:44.306 17:40:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.306 17:40:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:44.563 /dev/nbd0 00:05:44.563 17:40:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:44.563 17:40:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:44.563 17:40:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:44.563 17:40:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.563 17:40:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.563 17:40:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.563 17:40:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:44.563 17:40:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.563 17:40:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.563 17:40:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.563 17:40:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.563 1+0 records in 00:05:44.563 1+0 records out 
00:05:44.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000475512 s, 8.6 MB/s 00:05:44.563 17:40:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.563 17:40:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.563 17:40:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.563 17:40:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.563 17:40:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.563 17:40:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.563 17:40:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.563 17:40:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:44.837 /dev/nbd1 00:05:44.837 17:40:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:44.837 17:40:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:44.837 17:40:11 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:44.837 17:40:11 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:44.837 17:40:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:44.837 17:40:11 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:44.837 17:40:11 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:44.837 17:40:11 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:44.837 17:40:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:44.837 17:40:11 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:44.837 17:40:11 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:44.837 1+0 records in 00:05:44.837 1+0 records out 00:05:44.837 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429033 s, 9.5 MB/s 00:05:44.837 17:40:11 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.837 17:40:11 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:44.837 17:40:11 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:44.837 17:40:11 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:44.837 17:40:11 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:44.837 17:40:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:44.837 17:40:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:44.837 17:40:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:44.837 17:40:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:44.837 17:40:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:45.096 17:40:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:45.096 { 00:05:45.096 "nbd_device": "/dev/nbd0", 00:05:45.096 "bdev_name": "Malloc0" 00:05:45.096 }, 00:05:45.096 { 00:05:45.096 "nbd_device": "/dev/nbd1", 00:05:45.096 "bdev_name": "Malloc1" 00:05:45.096 } 
00:05:45.096 ]' 00:05:45.096 17:40:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:45.096 { 00:05:45.096 "nbd_device": "/dev/nbd0", 00:05:45.096 "bdev_name": "Malloc0" 00:05:45.096 }, 00:05:45.096 { 00:05:45.096 "nbd_device": "/dev/nbd1", 00:05:45.096 "bdev_name": "Malloc1" 00:05:45.096 } 00:05:45.096 ]' 00:05:45.096 17:40:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:45.097 /dev/nbd1' 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:45.097 /dev/nbd1' 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:45.097 256+0 records in 00:05:45.097 256+0 records out 00:05:45.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130466 s, 80.4 MB/s 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:45.097 256+0 records in 00:05:45.097 256+0 records out 00:05:45.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0302292 s, 34.7 MB/s 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:45.097 17:40:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:45.355 256+0 records in 00:05:45.355 256+0 records out 00:05:45.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0348606 s, 30.1 MB/s 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:45.355 17:40:12 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.355 17:40:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:45.615 17:40:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:45.615 17:40:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:45.615 17:40:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:45.615 17:40:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.615 17:40:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.615 17:40:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:45.615 17:40:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.615 17:40:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.615 17:40:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:45.615 17:40:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:45.873 17:40:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:45.873 17:40:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:45.873 17:40:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:45.873 17:40:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:45.873 17:40:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:45.873 17:40:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:45.873 17:40:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:45.873 17:40:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:45.873 17:40:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:45.873 17:40:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.873 17:40:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.134 17:40:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.134 17:40:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.134 17:40:13 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:46.134 17:40:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:46.134 17:40:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:46.134 17:40:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.134 17:40:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:46.134 17:40:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:46.134 17:40:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:46.134 17:40:13 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:46.134 17:40:13 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:46.134 17:40:13 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:46.134 17:40:13 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:46.701 17:40:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.080 [2024-11-20 17:40:14.857673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.080 [2024-11-20 17:40:14.974736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.080 [2024-11-20 17:40:14.974739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.080 [2024-11-20 17:40:15.174573] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.080 [2024-11-20 17:40:15.174635] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:49.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:49.982 17:40:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59255 /var/tmp/spdk-nbd.sock 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59255 ']' 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:49.982 17:40:16 event.app_repeat -- event/event.sh@39 -- # killprocess 59255 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59255 ']' 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59255 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59255 00:05:49.982 killing process with pid 59255 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59255' 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59255 00:05:49.982 17:40:16 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59255 00:05:50.917 spdk_app_start is called in Round 0. 00:05:50.917 Shutdown signal received, stop current app iteration 00:05:50.917 Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 reinitialization... 00:05:50.917 spdk_app_start is called in Round 1. 00:05:50.917 Shutdown signal received, stop current app iteration 00:05:50.917 Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 reinitialization... 00:05:50.917 spdk_app_start is called in Round 2. 00:05:50.917 Shutdown signal received, stop current app iteration 00:05:50.917 Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 reinitialization... 00:05:50.917 spdk_app_start is called in Round 3. 00:05:50.917 Shutdown signal received, stop current app iteration 00:05:50.917 17:40:18 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:50.917 17:40:18 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:50.917 00:05:50.917 real 0m20.413s 00:05:50.917 user 0m43.816s 00:05:50.917 sys 0m3.434s 00:05:50.917 ************************************ 00:05:50.917 END TEST app_repeat 00:05:50.917 ************************************ 00:05:50.917 17:40:18 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.917 17:40:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:51.175 17:40:18 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:51.175 17:40:18 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:51.175 17:40:18 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.175 17:40:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.175 17:40:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.175 ************************************ 00:05:51.175 START TEST cpu_locks 00:05:51.175 ************************************ 00:05:51.175 17:40:18 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:51.175 * Looking for test storage... 
00:05:51.175 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:51.175 17:40:18 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:51.175 17:40:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:05:51.175 17:40:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:51.175 17:40:18 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:51.175 17:40:18 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:51.434 17:40:18 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.434 17:40:18 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:51.434 17:40:18 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.434 17:40:18 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.434 17:40:18 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.434 17:40:18 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:51.434 17:40:18 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.434 17:40:18 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:51.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.434 --rc genhtml_branch_coverage=1 00:05:51.434 --rc genhtml_function_coverage=1 00:05:51.434 --rc genhtml_legend=1 00:05:51.434 --rc geninfo_all_blocks=1 00:05:51.434 --rc geninfo_unexecuted_blocks=1 00:05:51.434 00:05:51.434 ' 00:05:51.434 17:40:18 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:51.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.434 --rc genhtml_branch_coverage=1 00:05:51.434 --rc genhtml_function_coverage=1 
00:05:51.434 --rc genhtml_legend=1 00:05:51.434 --rc geninfo_all_blocks=1 00:05:51.434 --rc geninfo_unexecuted_blocks=1 00:05:51.434 00:05:51.434 ' 00:05:51.434 17:40:18 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:51.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.434 --rc genhtml_branch_coverage=1 00:05:51.434 --rc genhtml_function_coverage=1 00:05:51.434 --rc genhtml_legend=1 00:05:51.434 --rc geninfo_all_blocks=1 00:05:51.434 --rc geninfo_unexecuted_blocks=1 00:05:51.434 00:05:51.434 ' 00:05:51.434 17:40:18 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:51.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.434 --rc genhtml_branch_coverage=1 00:05:51.434 --rc genhtml_function_coverage=1 00:05:51.434 --rc genhtml_legend=1 00:05:51.434 --rc geninfo_all_blocks=1 00:05:51.434 --rc geninfo_unexecuted_blocks=1 00:05:51.434 00:05:51.434 ' 00:05:51.434 17:40:18 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:51.434 17:40:18 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:51.434 17:40:18 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:51.434 17:40:18 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:51.434 17:40:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:51.434 17:40:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.434 17:40:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.434 ************************************ 00:05:51.434 START TEST default_locks 00:05:51.434 ************************************ 00:05:51.434 17:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:51.434 17:40:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59714 00:05:51.434 17:40:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.434 17:40:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59714 00:05:51.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.434 17:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59714 ']' 00:05:51.434 17:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.434 17:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:51.434 17:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.434 17:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:51.434 17:40:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:51.434 [2024-11-20 17:40:18.494193] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
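
Before the first lock case, the scripts/common.sh trace above probes the installed lcov and only builds the old-style LCOV_OPTS/LCOV coverage flags when the version is older than 2. The comparison it steps through (lt 1.15 2 via cmp_versions) splits both version strings on '.', '-' and ':' and compares them field by field. A minimal re-implementation of that logic, simplified from the trace (the real helper also normalizes non-numeric fields through a decimal function):

    # Return 0 (true) when version $1 sorts strictly before version $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            # Missing fields count as 0, so "1.15" vs "2" compares 1 against 2.
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    lt 1.15 2 && echo 'lcov older than 2: use the --rc lcov_* option spelling'
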
00:05:51.434 [2024-11-20 17:40:18.494322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59714 ] 00:05:51.692 [2024-11-20 17:40:18.678934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.692 [2024-11-20 17:40:18.801504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.669 17:40:19 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:52.669 17:40:19 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:52.669 17:40:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59714 00:05:52.669 17:40:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59714 00:05:52.669 17:40:19 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.236 17:40:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59714 00:05:53.236 17:40:20 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59714 ']' 00:05:53.236 17:40:20 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59714 00:05:53.236 17:40:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:53.236 17:40:20 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.236 17:40:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59714 00:05:53.236 killing process with pid 59714 00:05:53.236 17:40:20 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.236 17:40:20 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.236 17:40:20 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59714' 00:05:53.236 17:40:20 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59714 00:05:53.236 17:40:20 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59714 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59714 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59714 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59714 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59714 ']' 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.768 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.768 ERROR: process (pid: 59714) is no longer running 00:05:55.768 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59714) - No such process 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:55.768 00:05:55.768 real 0m4.309s 00:05:55.768 user 0m4.294s 00:05:55.768 sys 0m0.703s 00:05:55.768 ************************************ 00:05:55.768 END TEST default_locks 00:05:55.768 ************************************ 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.768 17:40:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.768 17:40:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:55.768 17:40:22 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.768 17:40:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.768 17:40:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:55.768 ************************************ 00:05:55.768 START TEST default_locks_via_rpc 00:05:55.768 ************************************ 00:05:55.768 17:40:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:55.768 17:40:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59790 00:05:55.768 17:40:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.768 17:40:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59790 00:05:55.768 17:40:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59790 ']' 00:05:55.768 17:40:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.768 17:40:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.768 17:40:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.768 17:40:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.768 17:40:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.768 [2024-11-20 17:40:22.905797] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:05:55.768 [2024-11-20 17:40:22.905926] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59790 ] 00:05:56.027 [2024-11-20 17:40:23.087368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.285 [2024-11-20 17:40:23.205077] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59790 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59790 00:05:57.221 17:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.479 17:40:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59790 00:05:57.479 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59790 ']' 00:05:57.479 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59790 00:05:57.479 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:57.479 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.479 17:40:24 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59790 00:05:57.479 killing process with pid 59790 00:05:57.479 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.479 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.479 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59790' 00:05:57.479 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59790 00:05:57.479 17:40:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59790 00:06:00.040 ************************************ 00:06:00.040 END TEST default_locks_via_rpc 00:06:00.040 ************************************ 00:06:00.040 00:06:00.040 real 0m4.318s 00:06:00.040 user 0m4.340s 00:06:00.040 sys 0m0.755s 00:06:00.040 17:40:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.040 17:40:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.040 17:40:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:00.040 17:40:27 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.040 17:40:27 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.040 17:40:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:00.040 ************************************ 00:06:00.040 START TEST non_locking_app_on_locked_coremask 00:06:00.040 ************************************ 00:06:00.040 17:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:00.040 17:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59864 00:06:00.040 17:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59864 /var/tmp/spdk.sock 00:06:00.040 17:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.040 17:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59864 ']' 00:06:00.040 17:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.040 17:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.040 17:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.040 17:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.040 17:40:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:00.300 [2024-11-20 17:40:27.271796] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
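
Both default_locks variants above verify the core lock the same way: cpu_locks.sh@22 runs lslocks -p with the target pid and greps the output for SPDK's per-core lock files. lslocks (util-linux) lists the file locks a process currently holds, so the check passes only while the target still owns a file whose name contains spdk_cpu_lock. As a standalone sketch (the pid is the one from the log; any live pid works):

    # True while the given process holds at least one SPDK per-core lock.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist 59714 && echo 'spdk_tgt still holds its core lock'
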
00:06:00.300 [2024-11-20 17:40:27.272175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59864 ] 00:06:00.300 [2024-11-20 17:40:27.456249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.559 [2024-11-20 17:40:27.583166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.494 17:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.494 17:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:01.494 17:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59889 00:06:01.494 17:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59889 /var/tmp/spdk2.sock 00:06:01.494 17:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59889 ']' 00:06:01.494 17:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:01.494 17:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:01.494 17:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.494 17:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:01.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:01.494 17:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.494 17:40:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.494 [2024-11-20 17:40:28.556193] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:06:01.494 [2024-11-20 17:40:28.556598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59889 ] 00:06:01.753 [2024-11-20 17:40:28.745950] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
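
The "CPU core locks deactivated." notice directly above is the effect of the --disable-cpumask-locks flag visible in the trace: the second target (pid 59889) shares core mask 0x1 with pid 59864 but skips lock acquisition entirely, and it talks on -r /var/tmp/spdk2.sock so the two RPC sockets do not collide. Reduced to the two launches the test performs (binary path as in the log):

    # First instance claims and locks core 0 (mask 0x1).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &

    # Second instance shares core 0 but never takes the lock, and listens
    # on its own RPC socket.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
        --disable-cpumask-locks -r /var/tmp/spdk2.sock &
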
00:06:01.753 [2024-11-20 17:40:28.746031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.011 [2024-11-20 17:40:28.990338] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.545 17:40:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.545 17:40:31 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:04.545 17:40:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59864 00:06:04.545 17:40:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59864 00:06:04.545 17:40:31 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.116 17:40:32 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59864 00:06:05.116 17:40:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59864 ']' 00:06:05.116 17:40:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59864 00:06:05.116 17:40:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.116 17:40:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.116 17:40:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59864 00:06:05.116 killing process with pid 59864 00:06:05.116 17:40:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.116 17:40:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.116 17:40:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59864' 00:06:05.116 17:40:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59864 00:06:05.116 17:40:32 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59864 00:06:10.389 17:40:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59889 00:06:10.389 17:40:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59889 ']' 00:06:10.389 17:40:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59889 00:06:10.389 17:40:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:10.389 17:40:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.389 17:40:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59889 00:06:10.389 killing process with pid 59889 00:06:10.389 17:40:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.389 17:40:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.389 17:40:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59889' 00:06:10.389 17:40:36 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59889 00:06:10.389 17:40:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59889 00:06:12.295 00:06:12.295 real 0m12.226s 00:06:12.295 user 0m12.574s 00:06:12.295 sys 0m1.477s 00:06:12.295 17:40:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.295 ************************************ 00:06:12.295 END TEST non_locking_app_on_locked_coremask 00:06:12.295 ************************************ 00:06:12.295 17:40:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.295 17:40:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:12.295 17:40:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.295 17:40:39 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.295 17:40:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.295 ************************************ 00:06:12.295 START TEST locking_app_on_unlocked_coremask 00:06:12.295 ************************************ 00:06:12.295 17:40:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:12.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.295 17:40:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60039 00:06:12.295 17:40:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:12.295 17:40:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60039 /var/tmp/spdk.sock 00:06:12.295 17:40:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60039 ']' 00:06:12.295 17:40:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.295 17:40:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.295 17:40:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.295 17:40:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.295 17:40:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.554 [2024-11-20 17:40:39.571273] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:06:12.554 [2024-11-20 17:40:39.571678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60039 ] 00:06:12.814 [2024-11-20 17:40:39.754167] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
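
Every case tears down its targets through the killprocess helper traced above (autotest_common.sh@954 onwards): confirm the pid is still alive with kill -0, read the command name back with ps -o comm= to guard against pid reuse, then kill and reap it. A condensed sketch of that sequence (the comm check against sudo mirrors the '[' reactor_0 = sudo ']' test in the trace):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1            # is the process still alive?
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        if [ "$process_name" = sudo ]; then
            sudo kill "$pid"                  # an elevated target needs sudo
        else
            kill "$pid"
        fi
        wait "$pid"    # reap it (works here because the target is our child)
    }
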
00:06:12.814 [2024-11-20 17:40:39.754446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.814 [2024-11-20 17:40:39.872978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.750 17:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.750 17:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:13.750 17:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60066 00:06:13.750 17:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:13.750 17:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60066 /var/tmp/spdk2.sock 00:06:13.750 17:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60066 ']' 00:06:13.750 17:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:13.750 17:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.750 17:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:13.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:13.750 17:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.750 17:40:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:13.750 [2024-11-20 17:40:40.907756] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
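
Each startup above parks in waitforlisten (the autotest_common.sh@835-868 trace) until the new target answers on its UNIX-domain RPC socket or the retry budget (max_retries=100 in the trace) runs out. A bare-bones version of that loop, assuming a socket-file existence test as the readiness probe (the real helper goes further and issues an actual RPC before declaring the target ready):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" || return 1      # target died while we were waiting
            [ -S "$rpc_addr" ] && return 0  # socket exists: treat as listening
            sleep 0.1
        done
        return 1                            # retry budget exhausted
    }
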
00:06:13.750 [2024-11-20 17:40:40.908145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60066 ] 00:06:14.032 [2024-11-20 17:40:41.091693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.311 [2024-11-20 17:40:41.324721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.846 17:40:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.846 17:40:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:16.846 17:40:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60066 00:06:16.846 17:40:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60066 00:06:16.846 17:40:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.441 17:40:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60039 00:06:17.441 17:40:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60039 ']' 00:06:17.441 17:40:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60039 00:06:17.441 17:40:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:17.441 17:40:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:17.441 17:40:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60039 00:06:17.441 killing process with pid 60039 00:06:17.441 17:40:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:17.441 17:40:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:17.441 17:40:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60039' 00:06:17.441 17:40:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60039 00:06:17.441 17:40:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60039 00:06:22.717 17:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60066 00:06:22.717 17:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60066 ']' 00:06:22.717 17:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60066 00:06:22.717 17:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:22.717 17:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.717 17:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60066 00:06:22.717 17:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.717 killing process with pid 60066 00:06:22.717 17:40:49 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.717 17:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60066' 00:06:22.717 17:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60066 00:06:22.718 17:40:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60066 00:06:24.645 ************************************ 00:06:24.645 END TEST locking_app_on_unlocked_coremask 00:06:24.645 ************************************ 00:06:24.645 00:06:24.645 real 0m12.269s 00:06:24.645 user 0m12.586s 00:06:24.645 sys 0m1.457s 00:06:24.645 17:40:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.645 17:40:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.645 17:40:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:24.645 17:40:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:24.645 17:40:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.645 17:40:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.645 ************************************ 00:06:24.645 START TEST locking_app_on_locked_coremask 00:06:24.645 ************************************ 00:06:24.645 17:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:24.645 17:40:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60214 00:06:24.645 17:40:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60214 /var/tmp/spdk.sock 00:06:24.645 17:40:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.645 17:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60214 ']' 00:06:24.645 17:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.645 17:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:24.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.645 17:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.645 17:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:24.645 17:40:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.904 [2024-11-20 17:40:51.904192] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
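
The starred START TEST / END TEST banners and the real/user/sys lines after each case above come from the run_test wrapper (the '[' 2 -le 1 ']' argument check and xtrace_disable in the traces belong to it): it takes a test name plus the command to run, prints the banners, and times the body. A cut-down sketch of that wrapper (banner width and exact output formatting are illustrative):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                 # run the test body; prints real/user/sys
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }

    run_test default_locks default_locks    # as invoked by cpu_locks.sh@166
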
00:06:24.904 [2024-11-20 17:40:51.904327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60214 ] 00:06:24.904 [2024-11-20 17:40:52.075447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.164 [2024-11-20 17:40:52.186111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60236 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60236 /var/tmp/spdk2.sock 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60236 /var/tmp/spdk2.sock 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60236 /var/tmp/spdk2.sock 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60236 ']' 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.104 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.104 [2024-11-20 17:40:53.166489] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
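
locking_app_on_locked_coremask expects the second target (pid 60236) to fail its lock claim, so cpu_locks.sh@120 wraps waitforlisten in the NOT helper. The es=0 / valid_exec_arg trace above, and the es checks after the failure, show the idea: run the command, capture its exit status, and invert it. A compact sketch that keeps only the inversion (the real helper also distinguishes deaths by signal, the (( es > 128 )) test, and can match expected error strings):

    # Succeed only if the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))
    }

    NOT waitforlisten 60236 /var/tmp/spdk2.sock && echo 'lock correctly refused'
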
00:06:26.104 [2024-11-20 17:40:53.166677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60236 ] 00:06:26.364 [2024-11-20 17:40:53.360854] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60214 has claimed it. 00:06:26.364 [2024-11-20 17:40:53.360934] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:26.624 ERROR: process (pid: 60236) is no longer running 00:06:26.624 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60236) - No such process 00:06:26.624 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.624 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:26.624 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:26.624 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:26.624 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:26.624 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:26.624 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60214 00:06:26.624 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.624 17:40:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60214 00:06:27.191 17:40:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60214 00:06:27.192 17:40:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60214 ']' 00:06:27.192 17:40:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60214 00:06:27.192 17:40:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:27.192 17:40:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.192 17:40:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60214 00:06:27.192 17:40:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.192 17:40:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.192 killing process with pid 60214 00:06:27.192 17:40:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60214' 00:06:27.192 17:40:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60214 00:06:27.192 17:40:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60214 00:06:29.727 ************************************ 00:06:29.727 END TEST locking_app_on_locked_coremask 00:06:29.727 ************************************ 00:06:29.727 00:06:29.727 real 0m4.978s 00:06:29.727 user 0m5.190s 00:06:29.727 sys 0m0.885s 00:06:29.727 17:40:56 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.727 17:40:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.727 17:40:56 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:29.727 17:40:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.727 17:40:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.727 17:40:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.727 ************************************ 00:06:29.727 START TEST locking_overlapped_coremask 00:06:29.727 ************************************ 00:06:29.727 17:40:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:29.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.727 17:40:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60305 00:06:29.727 17:40:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60305 /var/tmp/spdk.sock 00:06:29.727 17:40:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60305 ']' 00:06:29.727 17:40:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.727 17:40:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:29.727 17:40:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.727 17:40:56 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:29.727 17:40:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:29.727 17:40:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.041 [2024-11-20 17:40:56.942386] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
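
locking_overlapped_coremask starts pid 60305 with -m 0x7 and, further below, a second target with -m 0x1c. 0x7 is binary 00111 (cores 0, 1 and 2, matching the three "Reactor started" lines that follow), while 0x1c is 11100 (cores 2, 3 and 4), so the two reactor sets intersect exactly on core 2; that one shared core is what the test needs to provoke a lock conflict. The overlap is plain shell arithmetic:

    mask1=0x7    # cores 0-2
    mask2=0x1c   # cores 2-4
    printf 'overlap mask: 0x%x\n' $(( mask1 & mask2 ))   # prints 0x4, i.e. core 2
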
00:06:30.041 [2024-11-20 17:40:56.942530] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60305 ] 00:06:30.041 [2024-11-20 17:40:57.129756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:30.299 [2024-11-20 17:40:57.253104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.299 [2024-11-20 17:40:57.253199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.299 [2024-11-20 17:40:57.253161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60323 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60323 /var/tmp/spdk2.sock 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60323 /var/tmp/spdk2.sock 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60323 /var/tmp/spdk2.sock 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60323 ']' 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:31.236 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.236 [2024-11-20 17:40:58.264050] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
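
The contested resource here is a file per core: check_remaining_locks later in this test globs /var/tmp/spdk_cpu_lock_* and expects exactly spdk_cpu_lock_000 through _002 for mask 0x7. Each reactor core is guarded by an advisory lock on such a file, and a second claimant fails, which is the "Cannot create lock on core 2" error that follows. A shell-level illustration with the flock utility (SPDK takes the lock inside app.c itself; only the file naming is taken from the test):

    lockfile=/var/tmp/spdk_cpu_lock_002   # per-core lock file for core 2

    # First claimant takes the lock and holds it while its command runs.
    flock -n "$lockfile" sleep 30 &

    sleep 0.2
    # Second claimant: -n fails immediately instead of blocking, mirroring
    # spdk_app_start's "Unable to acquire lock" exit.
    flock -n "$lockfile" true || echo 'core 2 already claimed'
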
00:06:31.236 [2024-11-20 17:40:58.264392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60323 ] 00:06:31.495 [2024-11-20 17:40:58.450061] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60305 has claimed it. 00:06:31.495 [2024-11-20 17:40:58.450152] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:31.755 ERROR: process (pid: 60323) is no longer running 00:06:31.755 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60323) - No such process 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60305 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60305 ']' 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60305 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60305 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.755 killing process with pid 60305 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60305' 00:06:31.755 17:40:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60305 00:06:31.755 17:40:58 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60305 00:06:34.287 ************************************ 00:06:34.287 END TEST locking_overlapped_coremask 00:06:34.287 ************************************ 00:06:34.287 00:06:34.287 real 0m4.550s 00:06:34.287 user 0m12.354s 00:06:34.287 sys 0m0.657s 00:06:34.287 17:41:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.287 17:41:01 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:34.287 17:41:01 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:34.287 17:41:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.287 17:41:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.287 17:41:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:34.287 ************************************ 00:06:34.287 START TEST locking_overlapped_coremask_via_rpc 00:06:34.287 ************************************ 00:06:34.287 17:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:34.287 17:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60393 00:06:34.287 17:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60393 /var/tmp/spdk.sock 00:06:34.287 17:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:34.287 17:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60393 ']' 00:06:34.287 17:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.287 17:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:34.287 17:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.287 17:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:34.287 17:41:01 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.545 [2024-11-20 17:41:01.554978] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:06:34.545 [2024-11-20 17:41:01.555112] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60393 ] 00:06:34.803 [2024-11-20 17:41:01.738496] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
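
The long backslash-escaped pattern in the trace above is just xtrace output of check_remaining_locks (cpu_locks.sh@36-38): after the overlapped test it compares the lock files actually present under /var/tmp against the expected brace expansion, one file per core of mask 0x7. The same check in isolation:

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)                    # what exists
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2
        # Any missing or leftover lock file makes the comparison, and the
        # test, fail.
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }
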
00:06:34.803 [2024-11-20 17:41:01.738574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:34.803 [2024-11-20 17:41:01.864599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.803 [2024-11-20 17:41:01.864751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.803 [2024-11-20 17:41:01.864814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.806 17:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.806 17:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:35.806 17:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60411 00:06:35.806 17:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60411 /var/tmp/spdk2.sock 00:06:35.806 17:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:35.806 17:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60411 ']' 00:06:35.806 17:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:35.806 17:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.806 17:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:35.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:35.806 17:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.806 17:41:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.806 [2024-11-20 17:41:02.875094] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:06:35.806 [2024-11-20 17:41:02.875647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60411 ] 00:06:36.066 [2024-11-20 17:41:03.063780] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
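
In the via_rpc variant both targets boot with --disable-cpumask-locks (hence the two "CPU core locks deactivated." notices above), and the locks are taken afterwards over JSON-RPC: the rpc_cmd framework_enable_cpumask_locks traces that follow. The harness keeps a persistent rpc.py session behind rpc_cmd; a one-shot equivalent with the bundled client against each target's socket would be:

    # Tell the first target (default socket) to claim its cores now.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        framework_enable_cpumask_locks

    # The same call to the second target must fail once core 2 is held by
    # pid 60393; that refusal is the JSON-RPC error logged below.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks
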
00:06:36.066 [2024-11-20 17:41:03.067863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:36.325 [2024-11-20 17:41:03.316415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:36.325 [2024-11-20 17:41:03.319855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:36.325 [2024-11-20 17:41:03.319886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:38.861 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.861 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:38.861 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:38.861 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.861 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.861 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:38.861 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.861 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.862 [2024-11-20 17:41:05.554035] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60393 has claimed it. 
00:06:38.862 request: 00:06:38.862 { 00:06:38.862 "method": "framework_enable_cpumask_locks", 00:06:38.862 "req_id": 1 00:06:38.862 } 00:06:38.862 Got JSON-RPC error response 00:06:38.862 response: 00:06:38.862 { 00:06:38.862 "code": -32603, 00:06:38.862 "message": "Failed to claim CPU core: 2" 00:06:38.862 } 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60393 /var/tmp/spdk.sock 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60393 ']' 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60411 /var/tmp/spdk2.sock 00:06:38.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60411 ']' 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
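The exchange above is the negative case passing: with pid 60393 holding the lock on core 2, the second target's framework_enable_cpumask_locks is refused with JSON-RPC error -32603. A hypothetical caller-side check, assuming rpc.py's usual behavior of printing the error response and exiting non-zero:

    # Expect the claim to fail while the first target still holds core 2.
    if ! scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo 'claim refused as expected: the masks overlap on core 2'
    fi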
00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.862 17:41:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.862 ************************************ 00:06:38.862 END TEST locking_overlapped_coremask_via_rpc 00:06:38.862 ************************************ 00:06:38.862 17:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.862 17:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:38.862 17:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:38.862 17:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:38.862 17:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:38.862 17:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:38.862 00:06:38.862 real 0m4.567s 00:06:38.862 user 0m1.380s 00:06:38.862 sys 0m0.237s 00:06:38.862 17:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.862 17:41:06 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.121 17:41:06 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:39.121 17:41:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60393 ]] 00:06:39.121 17:41:06 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60393 00:06:39.121 17:41:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60393 ']' 00:06:39.121 17:41:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60393 00:06:39.121 17:41:06 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:39.121 17:41:06 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.121 17:41:06 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60393 00:06:39.121 killing process with pid 60393 00:06:39.121 17:41:06 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.121 17:41:06 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.121 17:41:06 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60393' 00:06:39.121 17:41:06 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60393 00:06:39.121 17:41:06 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60393 00:06:41.691 17:41:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60411 ]] 00:06:41.691 17:41:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60411 00:06:41.691 17:41:08 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60411 ']' 00:06:41.691 17:41:08 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60411 00:06:41.691 17:41:08 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:41.691 17:41:08 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:41.691 
17:41:08 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60411 00:06:41.691 killing process with pid 60411 00:06:41.691 17:41:08 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:41.691 17:41:08 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:41.691 17:41:08 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60411' 00:06:41.691 17:41:08 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60411 00:06:41.691 17:41:08 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60411 00:06:44.227 17:41:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:44.227 17:41:11 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:44.227 17:41:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60393 ]] 00:06:44.227 17:41:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60393 00:06:44.227 17:41:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60393 ']' 00:06:44.227 17:41:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60393 00:06:44.227 Process with pid 60393 is not found 00:06:44.227 Process with pid 60411 is not found 00:06:44.227 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60393) - No such process 00:06:44.227 17:41:11 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60393 is not found' 00:06:44.227 17:41:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60411 ]] 00:06:44.227 17:41:11 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60411 00:06:44.227 17:41:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60411 ']' 00:06:44.227 17:41:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60411 00:06:44.227 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60411) - No such process 00:06:44.227 17:41:11 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60411 is not found' 00:06:44.227 17:41:11 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:44.227 00:06:44.227 real 0m52.983s 00:06:44.227 user 1m29.883s 00:06:44.227 sys 0m7.510s 00:06:44.227 17:41:11 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.227 17:41:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.227 ************************************ 00:06:44.227 END TEST cpu_locks 00:06:44.227 ************************************ 00:06:44.227 ************************************ 00:06:44.227 END TEST event 00:06:44.227 ************************************ 00:06:44.227 00:06:44.227 real 1m25.834s 00:06:44.227 user 2m35.485s 00:06:44.227 sys 0m12.245s 00:06:44.227 17:41:11 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.227 17:41:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.227 17:41:11 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:44.227 17:41:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.227 17:41:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.227 17:41:11 -- common/autotest_common.sh@10 -- # set +x 00:06:44.227 ************************************ 00:06:44.227 START TEST thread 00:06:44.227 ************************************ 00:06:44.227 17:41:11 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:44.227 * Looking for test storage... 
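The killprocess traces above follow a consistent probe-then-kill pattern: kill -0 tests liveness, ps -o comm= fetches the process name (reactor_0, reactor_2) so the helper never signals sudo itself, and kill is followed by wait to reap the child. A condensed sketch of that pattern (the pid is illustrative, and the real helper additionally handles the sudo-wrapped case by targeting its child):

    pid=60393
    if kill -0 "$pid" 2>/dev/null; then                    # still alive?
        name=$(ps --no-headers -o comm= "$pid")            # e.g. reactor_0
        [ "$name" != sudo ] && kill "$pid" && wait "$pid"  # kill, then reap
    else
        echo "Process with pid $pid is not found"
    fi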
00:06:44.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:44.227 17:41:11 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.227 17:41:11 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.227 17:41:11 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.487 17:41:11 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.487 17:41:11 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.487 17:41:11 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.487 17:41:11 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.487 17:41:11 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.487 17:41:11 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.487 17:41:11 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.487 17:41:11 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.487 17:41:11 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.487 17:41:11 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.487 17:41:11 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.487 17:41:11 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.487 17:41:11 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:44.487 17:41:11 thread -- scripts/common.sh@345 -- # : 1 00:06:44.487 17:41:11 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.487 17:41:11 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:44.487 17:41:11 thread -- scripts/common.sh@365 -- # decimal 1 00:06:44.487 17:41:11 thread -- scripts/common.sh@353 -- # local d=1 00:06:44.487 17:41:11 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.487 17:41:11 thread -- scripts/common.sh@355 -- # echo 1 00:06:44.487 17:41:11 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.487 17:41:11 thread -- scripts/common.sh@366 -- # decimal 2 00:06:44.487 17:41:11 thread -- scripts/common.sh@353 -- # local d=2 00:06:44.487 17:41:11 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.487 17:41:11 thread -- scripts/common.sh@355 -- # echo 2 00:06:44.487 17:41:11 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.487 17:41:11 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.487 17:41:11 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.487 17:41:11 thread -- scripts/common.sh@368 -- # return 0 00:06:44.487 17:41:11 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.487 17:41:11 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.487 --rc genhtml_branch_coverage=1 00:06:44.487 --rc genhtml_function_coverage=1 00:06:44.487 --rc genhtml_legend=1 00:06:44.487 --rc geninfo_all_blocks=1 00:06:44.487 --rc geninfo_unexecuted_blocks=1 00:06:44.487 00:06:44.487 ' 00:06:44.487 17:41:11 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.487 --rc genhtml_branch_coverage=1 00:06:44.487 --rc genhtml_function_coverage=1 00:06:44.487 --rc genhtml_legend=1 00:06:44.487 --rc geninfo_all_blocks=1 00:06:44.487 --rc geninfo_unexecuted_blocks=1 00:06:44.487 00:06:44.487 ' 00:06:44.487 17:41:11 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.487 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:44.487 --rc genhtml_branch_coverage=1 00:06:44.488 --rc genhtml_function_coverage=1 00:06:44.488 --rc genhtml_legend=1 00:06:44.488 --rc geninfo_all_blocks=1 00:06:44.488 --rc geninfo_unexecuted_blocks=1 00:06:44.488 00:06:44.488 ' 00:06:44.488 17:41:11 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.488 --rc genhtml_branch_coverage=1 00:06:44.488 --rc genhtml_function_coverage=1 00:06:44.488 --rc genhtml_legend=1 00:06:44.488 --rc geninfo_all_blocks=1 00:06:44.488 --rc geninfo_unexecuted_blocks=1 00:06:44.488 00:06:44.488 ' 00:06:44.488 17:41:11 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:44.488 17:41:11 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:44.488 17:41:11 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.488 17:41:11 thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.488 ************************************ 00:06:44.488 START TEST thread_poller_perf 00:06:44.488 ************************************ 00:06:44.488 17:41:11 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:44.488 [2024-11-20 17:41:11.530041] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:06:44.488 [2024-11-20 17:41:11.530346] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60606 ] 00:06:44.747 [2024-11-20 17:41:11.713445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.747 [2024-11-20 17:41:11.833326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.747 Running 1000 pollers for 1 seconds with 1 microseconds period. 
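The announcement just above maps one-to-one onto the poller_perf flags from the run_test line: -b is the number of pollers to register, -l the poller period in microseconds, and -t the run time in seconds. For reference:

    # Prints 'Running 1000 pollers for 1 seconds with 1 microseconds period.'
    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1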
00:06:46.149 [2024-11-20T17:41:13.325Z] ====================================== 00:06:46.149 [2024-11-20T17:41:13.325Z] busy:2503528388 (cyc) 00:06:46.149 [2024-11-20T17:41:13.325Z] total_run_count: 387000 00:06:46.149 [2024-11-20T17:41:13.325Z] tsc_hz: 2490000000 (cyc) 00:06:46.149 [2024-11-20T17:41:13.325Z] ====================================== 00:06:46.149 [2024-11-20T17:41:13.325Z] poller_cost: 6469 (cyc), 2597 (nsec) 00:06:46.149 00:06:46.149 real 0m1.602s 00:06:46.149 user 0m1.372s 00:06:46.149 sys 0m0.121s 00:06:46.150 ************************************ 00:06:46.150 END TEST thread_poller_perf 00:06:46.150 ************************************ 00:06:46.150 17:41:13 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:46.150 17:41:13 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:46.150 17:41:13 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:46.150 17:41:13 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:46.150 17:41:13 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:46.150 17:41:13 thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.150 ************************************ 00:06:46.150 START TEST thread_poller_perf 00:06:46.150 ************************************ 00:06:46.150 17:41:13 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:46.150 [2024-11-20 17:41:13.209231] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:06:46.150 [2024-11-20 17:41:13.209366] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60648 ] 00:06:46.408 [2024-11-20 17:41:13.389881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.408 Running 1000 pollers for 1 seconds with 0 microseconds period. 
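The derived poller_cost in the table above follows from the raw counters by integer division, and the same formula reproduces the -l 0 table further below. A quick check with the constants copied from the tables (int() matches the tool's apparent truncation):

    awk 'BEGIN {
        # -l 1 run: 2503528388 busy cycles over 387000 runs at 2.49 GHz
        cyc  = int(2503528388 / 387000)      # 6469 cycles per poller run
        nsec = int(cyc * 1e9 / 2490000000)   # 2597 nsec
        printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, nsec
        # -l 0 run below: int(2493767250 / 5114000) = 487 cyc -> 195 nsec
    }'

By that arithmetic the period-driven pollers (-l 1) cost roughly 13x more per run than the untimed ones, though the log itself does not break down where the extra cycles go.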
00:06:46.408 [2024-11-20 17:41:13.506629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.785 [2024-11-20T17:41:14.961Z] ====================================== 00:06:47.785 [2024-11-20T17:41:14.961Z] busy:2493767250 (cyc) 00:06:47.785 [2024-11-20T17:41:14.961Z] total_run_count: 5114000 00:06:47.785 [2024-11-20T17:41:14.961Z] tsc_hz: 2490000000 (cyc) 00:06:47.785 [2024-11-20T17:41:14.961Z] ====================================== 00:06:47.785 [2024-11-20T17:41:14.961Z] poller_cost: 487 (cyc), 195 (nsec) 00:06:47.785 00:06:47.785 real 0m1.585s 00:06:47.785 user 0m1.360s 00:06:47.785 sys 0m0.116s 00:06:47.785 17:41:14 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.785 17:41:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:47.785 ************************************ 00:06:47.785 END TEST thread_poller_perf 00:06:47.785 ************************************ 00:06:47.785 17:41:14 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:47.785 00:06:47.785 real 0m3.574s 00:06:47.785 user 0m2.908s 00:06:47.785 sys 0m0.463s 00:06:47.785 ************************************ 00:06:47.785 END TEST thread 00:06:47.785 ************************************ 00:06:47.785 17:41:14 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:47.785 17:41:14 thread -- common/autotest_common.sh@10 -- # set +x 00:06:47.785 17:41:14 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:47.785 17:41:14 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:47.785 17:41:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:47.785 17:41:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:47.785 17:41:14 -- common/autotest_common.sh@10 -- # set +x 00:06:47.785 ************************************ 00:06:47.785 START TEST app_cmdline 00:06:47.785 ************************************ 00:06:47.785 17:41:14 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:48.044 * Looking for test storage... 
00:06:48.044 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:48.044 17:41:15 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:48.044 17:41:15 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:06:48.044 17:41:15 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:48.044 17:41:15 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.044 17:41:15 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:48.044 17:41:15 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.044 17:41:15 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:48.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.044 --rc genhtml_branch_coverage=1 00:06:48.044 --rc genhtml_function_coverage=1 00:06:48.044 --rc genhtml_legend=1 00:06:48.044 --rc geninfo_all_blocks=1 00:06:48.044 --rc geninfo_unexecuted_blocks=1 00:06:48.044 00:06:48.044 ' 00:06:48.044 17:41:15 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:48.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.044 --rc genhtml_branch_coverage=1 00:06:48.044 --rc genhtml_function_coverage=1 00:06:48.044 --rc genhtml_legend=1 00:06:48.044 --rc geninfo_all_blocks=1 00:06:48.044 --rc geninfo_unexecuted_blocks=1 00:06:48.044 
00:06:48.044 ' 00:06:48.044 17:41:15 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:48.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.044 --rc genhtml_branch_coverage=1 00:06:48.044 --rc genhtml_function_coverage=1 00:06:48.044 --rc genhtml_legend=1 00:06:48.044 --rc geninfo_all_blocks=1 00:06:48.044 --rc geninfo_unexecuted_blocks=1 00:06:48.044 00:06:48.044 ' 00:06:48.044 17:41:15 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:48.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.044 --rc genhtml_branch_coverage=1 00:06:48.044 --rc genhtml_function_coverage=1 00:06:48.044 --rc genhtml_legend=1 00:06:48.044 --rc geninfo_all_blocks=1 00:06:48.044 --rc geninfo_unexecuted_blocks=1 00:06:48.044 00:06:48.044 ' 00:06:48.044 17:41:15 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:48.044 17:41:15 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60737 00:06:48.044 17:41:15 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60737 00:06:48.044 17:41:15 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60737 ']' 00:06:48.044 17:41:15 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.044 17:41:15 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:48.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.044 17:41:15 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.044 17:41:15 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:48.044 17:41:15 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:48.044 17:41:15 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:48.044 [2024-11-20 17:41:15.216307] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
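cmdline.sh has just started the target with an --rpcs-allowed allowlist, so only the two named methods are served. A minimal sketch of the behavior this test verifies, with outcomes summarized from the exchanges that follow:

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version        # allowed: returns the version object
    scripts/rpc.py rpc_get_methods         # allowed: lists exactly these two methods
    scripts/rpc.py env_dpdk_get_mem_stats  # blocked: 'Method not found' (-32601)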
00:06:48.044 [2024-11-20 17:41:15.216437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60737 ] 00:06:48.303 [2024-11-20 17:41:15.402928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.562 [2024-11-20 17:41:15.522615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.500 17:41:16 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:49.500 17:41:16 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:49.500 17:41:16 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:49.500 { 00:06:49.500 "version": "SPDK v25.01-pre git sha1 09ac735c8", 00:06:49.500 "fields": { 00:06:49.500 "major": 25, 00:06:49.500 "minor": 1, 00:06:49.500 "patch": 0, 00:06:49.500 "suffix": "-pre", 00:06:49.500 "commit": "09ac735c8" 00:06:49.500 } 00:06:49.500 } 00:06:49.500 17:41:16 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:49.500 17:41:16 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:49.500 17:41:16 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:49.500 17:41:16 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:49.500 17:41:16 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:49.500 17:41:16 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:49.500 17:41:16 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:49.500 17:41:16 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:49.500 17:41:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:49.500 17:41:16 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:49.759 17:41:16 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:49.759 17:41:16 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:49.759 17:41:16 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:49.759 request: 00:06:49.759 { 00:06:49.759 "method": "env_dpdk_get_mem_stats", 00:06:49.759 "req_id": 1 00:06:49.759 } 00:06:49.759 Got JSON-RPC error response 00:06:49.759 response: 00:06:49.759 { 00:06:49.759 "code": -32601, 00:06:49.759 "message": "Method not found" 00:06:49.759 } 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:49.759 17:41:16 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60737 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60737 ']' 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60737 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:49.759 17:41:16 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60737 00:06:50.018 17:41:16 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.018 17:41:16 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.018 killing process with pid 60737 00:06:50.018 17:41:16 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60737' 00:06:50.018 17:41:16 app_cmdline -- common/autotest_common.sh@973 -- # kill 60737 00:06:50.018 17:41:16 app_cmdline -- common/autotest_common.sh@978 -- # wait 60737 00:06:52.548 00:06:52.548 real 0m4.464s 00:06:52.548 user 0m4.646s 00:06:52.548 sys 0m0.662s 00:06:52.548 17:41:19 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.548 ************************************ 00:06:52.548 END TEST app_cmdline 00:06:52.548 ************************************ 00:06:52.548 17:41:19 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:52.548 17:41:19 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:52.548 17:41:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:52.548 17:41:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.548 17:41:19 -- common/autotest_common.sh@10 -- # set +x 00:06:52.548 ************************************ 00:06:52.548 START TEST version 00:06:52.548 ************************************ 00:06:52.548 17:41:19 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:52.548 * Looking for test storage... 
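The version test starting here parses include/spdk/version.h with the get_header_version helper traced below. A condensed equivalent for one component (the cut -f2 in the trace implies the header's fields are tab-separated):

    # SPDK_VERSION_MAJOR -> 25
    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h \
        | cut -f2 | tr -d '"'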
00:06:52.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:52.548 17:41:19 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:52.548 17:41:19 version -- common/autotest_common.sh@1693 -- # lcov --version 00:06:52.548 17:41:19 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:52.548 17:41:19 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:52.548 17:41:19 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.548 17:41:19 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.548 17:41:19 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.548 17:41:19 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.548 17:41:19 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.548 17:41:19 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.548 17:41:19 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.548 17:41:19 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.548 17:41:19 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.548 17:41:19 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.548 17:41:19 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.548 17:41:19 version -- scripts/common.sh@344 -- # case "$op" in 00:06:52.548 17:41:19 version -- scripts/common.sh@345 -- # : 1 00:06:52.548 17:41:19 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.548 17:41:19 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:52.548 17:41:19 version -- scripts/common.sh@365 -- # decimal 1 00:06:52.548 17:41:19 version -- scripts/common.sh@353 -- # local d=1 00:06:52.548 17:41:19 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.548 17:41:19 version -- scripts/common.sh@355 -- # echo 1 00:06:52.548 17:41:19 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.548 17:41:19 version -- scripts/common.sh@366 -- # decimal 2 00:06:52.548 17:41:19 version -- scripts/common.sh@353 -- # local d=2 00:06:52.548 17:41:19 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.548 17:41:19 version -- scripts/common.sh@355 -- # echo 2 00:06:52.548 17:41:19 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.548 17:41:19 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.548 17:41:19 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.548 17:41:19 version -- scripts/common.sh@368 -- # return 0 00:06:52.548 17:41:19 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.548 17:41:19 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:52.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.548 --rc genhtml_branch_coverage=1 00:06:52.548 --rc genhtml_function_coverage=1 00:06:52.548 --rc genhtml_legend=1 00:06:52.548 --rc geninfo_all_blocks=1 00:06:52.548 --rc geninfo_unexecuted_blocks=1 00:06:52.548 00:06:52.548 ' 00:06:52.548 17:41:19 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:52.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.548 --rc genhtml_branch_coverage=1 00:06:52.548 --rc genhtml_function_coverage=1 00:06:52.548 --rc genhtml_legend=1 00:06:52.548 --rc geninfo_all_blocks=1 00:06:52.548 --rc geninfo_unexecuted_blocks=1 00:06:52.548 00:06:52.548 ' 00:06:52.548 17:41:19 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:52.548 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:52.548 --rc genhtml_branch_coverage=1 00:06:52.548 --rc genhtml_function_coverage=1 00:06:52.548 --rc genhtml_legend=1 00:06:52.548 --rc geninfo_all_blocks=1 00:06:52.548 --rc geninfo_unexecuted_blocks=1 00:06:52.548 00:06:52.548 ' 00:06:52.548 17:41:19 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:52.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.548 --rc genhtml_branch_coverage=1 00:06:52.548 --rc genhtml_function_coverage=1 00:06:52.548 --rc genhtml_legend=1 00:06:52.548 --rc geninfo_all_blocks=1 00:06:52.548 --rc geninfo_unexecuted_blocks=1 00:06:52.548 00:06:52.548 ' 00:06:52.548 17:41:19 version -- app/version.sh@17 -- # get_header_version major 00:06:52.548 17:41:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:52.548 17:41:19 version -- app/version.sh@14 -- # cut -f2 00:06:52.548 17:41:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:52.548 17:41:19 version -- app/version.sh@17 -- # major=25 00:06:52.548 17:41:19 version -- app/version.sh@18 -- # get_header_version minor 00:06:52.548 17:41:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:52.548 17:41:19 version -- app/version.sh@14 -- # cut -f2 00:06:52.548 17:41:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:52.548 17:41:19 version -- app/version.sh@18 -- # minor=1 00:06:52.548 17:41:19 version -- app/version.sh@19 -- # get_header_version patch 00:06:52.548 17:41:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:52.548 17:41:19 version -- app/version.sh@14 -- # cut -f2 00:06:52.548 17:41:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:52.548 17:41:19 version -- app/version.sh@19 -- # patch=0 00:06:52.548 17:41:19 version -- app/version.sh@20 -- # get_header_version suffix 00:06:52.548 17:41:19 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:52.548 17:41:19 version -- app/version.sh@14 -- # tr -d '"' 00:06:52.548 17:41:19 version -- app/version.sh@14 -- # cut -f2 00:06:52.548 17:41:19 version -- app/version.sh@20 -- # suffix=-pre 00:06:52.548 17:41:19 version -- app/version.sh@22 -- # version=25.1 00:06:52.548 17:41:19 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:52.548 17:41:19 version -- app/version.sh@28 -- # version=25.1rc0 00:06:52.548 17:41:19 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:52.548 17:41:19 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:52.548 17:41:19 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:52.548 17:41:19 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:52.548 00:06:52.548 real 0m0.280s 00:06:52.549 user 0m0.164s 00:06:52.549 sys 0m0.181s 00:06:52.549 17:41:19 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:52.549 17:41:19 version -- common/autotest_common.sh@10 -- # set +x 00:06:52.549 ************************************ 00:06:52.549 END TEST version 00:06:52.549 ************************************ 00:06:52.808 17:41:19 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:52.808 17:41:19 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:52.808 17:41:19 -- spdk/autotest.sh@194 -- # uname -s 00:06:52.808 17:41:19 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:52.808 17:41:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:52.808 17:41:19 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:52.808 17:41:19 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:52.808 17:41:19 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:52.808 17:41:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:52.808 17:41:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:52.808 17:41:19 -- common/autotest_common.sh@10 -- # set +x 00:06:52.808 ************************************ 00:06:52.808 START TEST blockdev_nvme 00:06:52.808 ************************************ 00:06:52.808 17:41:19 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:52.808 * Looking for test storage... 00:06:52.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:52.808 17:41:19 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:52.808 17:41:19 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:06:52.808 17:41:19 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:52.808 17:41:19 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:52.808 17:41:19 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:53.067 17:41:19 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:53.067 17:41:19 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:53.067 17:41:19 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:53.067 17:41:19 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:53.067 17:41:19 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:53.067 17:41:19 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:53.067 17:41:19 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:53.067 17:41:19 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:53.067 17:41:19 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:53.067 17:41:19 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:53.067 17:41:19 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:53.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.067 --rc genhtml_branch_coverage=1 00:06:53.067 --rc genhtml_function_coverage=1 00:06:53.067 --rc genhtml_legend=1 00:06:53.067 --rc geninfo_all_blocks=1 00:06:53.067 --rc geninfo_unexecuted_blocks=1 00:06:53.067 00:06:53.067 ' 00:06:53.067 17:41:19 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:53.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.068 --rc genhtml_branch_coverage=1 00:06:53.068 --rc genhtml_function_coverage=1 00:06:53.068 --rc genhtml_legend=1 00:06:53.068 --rc geninfo_all_blocks=1 00:06:53.068 --rc geninfo_unexecuted_blocks=1 00:06:53.068 00:06:53.068 ' 00:06:53.068 17:41:19 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:53.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.068 --rc genhtml_branch_coverage=1 00:06:53.068 --rc genhtml_function_coverage=1 00:06:53.068 --rc genhtml_legend=1 00:06:53.068 --rc geninfo_all_blocks=1 00:06:53.068 --rc geninfo_unexecuted_blocks=1 00:06:53.068 00:06:53.068 ' 00:06:53.068 17:41:19 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:53.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:53.068 --rc genhtml_branch_coverage=1 00:06:53.068 --rc genhtml_function_coverage=1 00:06:53.068 --rc genhtml_legend=1 00:06:53.068 --rc geninfo_all_blocks=1 00:06:53.068 --rc geninfo_unexecuted_blocks=1 00:06:53.068 00:06:53.068 ' 00:06:53.068 17:41:19 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:53.068 17:41:19 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:53.068 17:41:19 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:53.068 17:41:19 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:53.068 17:41:19 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:53.068 17:41:19 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:53.068 17:41:19 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:53.068 17:41:19 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:53.068 17:41:19 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:53.068 17:41:19 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:53.068 17:41:19 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:53.068 17:41:19 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:53.068 17:41:20 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:53.068 17:41:20 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:53.068 17:41:20 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:53.068 17:41:20 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:53.068 17:41:20 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:53.068 17:41:20 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:53.068 17:41:20 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:53.068 17:41:20 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:53.068 17:41:20 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:53.068 17:41:20 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:53.068 17:41:20 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:53.068 17:41:20 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:53.068 17:41:20 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60926 00:06:53.068 17:41:20 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:53.068 17:41:20 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60926 00:06:53.068 17:41:20 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:53.068 17:41:20 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 60926 ']' 00:06:53.068 17:41:20 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.068 17:41:20 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.068 17:41:20 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:53.068 17:41:20 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.068 17:41:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:53.068 [2024-11-20 17:41:20.139550] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
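setup_nvme_conf below generates and loads a JSON config attaching four emulated PCIe controllers, Nvme0 through Nvme3 at 0000:00:10.0 through 0000:00:13.0. The same attachments could be made one controller at a time over RPC; a sketch using the addresses from that config:

    for i in 0 1 2 3; do
        scripts/rpc.py bdev_nvme_attach_controller -b "Nvme$i" -t PCIe -a "0000:00:1$i.0"
    done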
00:06:53.068 [2024-11-20 17:41:20.139709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60926 ] 00:06:53.328 [2024-11-20 17:41:20.328009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.328 [2024-11-20 17:41:20.450519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.263 17:41:21 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:54.263 17:41:21 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:06:54.263 17:41:21 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:54.263 17:41:21 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:54.263 17:41:21 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:54.263 17:41:21 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:54.263 17:41:21 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:54.521 17:41:21 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:54.521 17:41:21 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.521 17:41:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.780 17:41:21 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.780 17:41:21 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:54.780 17:41:21 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.780 17:41:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.780 17:41:21 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.780 17:41:21 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:54.780 17:41:21 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:54.780 17:41:21 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.780 17:41:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.780 17:41:21 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.780 17:41:21 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:54.780 17:41:21 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.780 17:41:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.780 17:41:21 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.780 17:41:21 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:54.780 17:41:21 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.780 17:41:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.780 17:41:21 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:54.780 17:41:21 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:54.780 17:41:21 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:54.780 17:41:21 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:54.780 17:41:21 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:54.780 17:41:21 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.780 17:41:21 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:55.039 17:41:21 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:55.040 17:41:21 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "00f8d230-7bee-4343-bd92-6d0a1b87a063"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "00f8d230-7bee-4343-bd92-6d0a1b87a063",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "6f537bce-a01b-439d-9ab6-8d6c51f95ade"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "6f537bce-a01b-439d-9ab6-8d6c51f95ade",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' 
"ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "3c9ff373-c321-4f83-bcff-2cf521c2f8bf"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "3c9ff373-c321-4f83-bcff-2cf521c2f8bf",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "e1d012ca-1bb1-466c-8a3e-54303a0c449a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e1d012ca-1bb1-466c-8a3e-54303a0c449a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "5e7b5305-e67b-4108-9bc8-5866e5fb7f8c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5e7b5305-e67b-4108-9bc8-5866e5fb7f8c",' ' "numa_id": -1,' ' "assigned_rate_limits": 
{' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "53961f3d-a3a6-4732-a3f8-b1348990278e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "53961f3d-a3a6-4732-a3f8-b1348990278e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:55.040 17:41:21 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:55.040 17:41:22 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:55.040 17:41:22 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:55.040 17:41:22 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:55.040 17:41:22 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60926 00:06:55.040 17:41:22 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 60926 ']' 00:06:55.040 17:41:22 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 60926 00:06:55.040 17:41:22 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:06:55.040 17:41:22 blockdev_nvme -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:55.040 17:41:22 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60926 00:06:55.040 17:41:22 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:55.040 17:41:22 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:55.040 killing process with pid 60926 00:06:55.040 17:41:22 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60926' 00:06:55.040 17:41:22 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 60926 00:06:55.040 17:41:22 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 60926 00:06:57.632 17:41:24 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:57.632 17:41:24 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:57.632 17:41:24 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:57.632 17:41:24 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.632 17:41:24 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:57.632 ************************************ 00:06:57.632 START TEST bdev_hello_world 00:06:57.632 ************************************ 00:06:57.632 17:41:24 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:57.632 [2024-11-20 17:41:24.561127] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:06:57.632 [2024-11-20 17:41:24.561269] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61026 ] 00:06:57.632 [2024-11-20 17:41:24.743624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.891 [2024-11-20 17:41:24.866718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.459 [2024-11-20 17:41:25.538764] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:58.459 [2024-11-20 17:41:25.538844] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:58.459 [2024-11-20 17:41:25.538891] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:58.459 [2024-11-20 17:41:25.542002] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:58.459 [2024-11-20 17:41:25.542745] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:58.459 [2024-11-20 17:41:25.542797] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:58.459 [2024-11-20 17:41:25.543071] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
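The hello_bdev run above opens Nvme0n1, writes a buffer through an I/O channel, reads it back, and stops once the string round-trips. A minimal sketch of reproducing that run by hand, using the same arguments as the invocation traced in this log:

# Re-run the example against the first attached controller's namespace;
# expect the write-complete notice followed by "Read string from bdev : Hello World!".
cd /home/vagrant/spdk_repo/spdk
build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1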
00:06:58.459 00:06:58.459 [2024-11-20 17:41:25.543107] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:59.836 00:06:59.836 real 0m2.231s 00:06:59.836 user 0m1.858s 00:06:59.836 sys 0m0.266s 00:06:59.836 17:41:26 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.836 17:41:26 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:59.836 ************************************ 00:06:59.836 END TEST bdev_hello_world 00:06:59.836 ************************************ 00:06:59.836 17:41:26 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:59.836 17:41:26 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:59.836 17:41:26 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.836 17:41:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.836 ************************************ 00:06:59.836 START TEST bdev_bounds 00:06:59.836 ************************************ 00:06:59.836 17:41:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:59.836 17:41:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61069 00:06:59.836 17:41:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:59.836 17:41:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:59.836 Process bdevio pid: 61069 00:06:59.836 17:41:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61069' 00:06:59.836 17:41:26 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61069 00:06:59.836 17:41:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61069 ']' 00:06:59.836 17:41:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.836 17:41:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.836 17:41:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.836 17:41:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.836 17:41:26 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:59.836 [2024-11-20 17:41:26.876725] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
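bdev_bounds runs bdevio as a server (-w keeps it waiting for an RPC trigger) and then drives the suites from a second process. A sketch of the two halves, with the paths and flags taken from the trace above:

# Shell 1: bdevio loads the same bdev config and waits on /var/tmp/spdk.sock
test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json
# Shell 2: once the socket is listening, trigger the I/O-boundary suites
test/bdev/bdevio/tests.py perform_tests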
00:06:59.836 [2024-11-20 17:41:26.876895] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61069 ] 00:07:00.094 [2024-11-20 17:41:27.067590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:00.094 [2024-11-20 17:41:27.189949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.094 [2024-11-20 17:41:27.190095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.094 [2024-11-20 17:41:27.190125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.032 17:41:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.032 17:41:27 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:07:01.032 17:41:27 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:01.032 I/O targets: 00:07:01.032 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:01.032 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:01.032 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:01.032 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:01.032 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:01.032 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:01.032 00:07:01.032 00:07:01.032 CUnit - A unit testing framework for C - Version 2.1-3 00:07:01.032 http://cunit.sourceforge.net/ 00:07:01.032 00:07:01.032 00:07:01.032 Suite: bdevio tests on: Nvme3n1 00:07:01.032 Test: blockdev write read block ...passed 00:07:01.032 Test: blockdev write zeroes read block ...passed 00:07:01.032 Test: blockdev write zeroes read no split ...passed 00:07:01.032 Test: blockdev write zeroes read split ...passed 00:07:01.032 Test: blockdev write zeroes read split partial ...passed 00:07:01.032 Test: blockdev reset ...[2024-11-20 17:41:28.062184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:01.032 [2024-11-20 17:41:28.066374] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
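Each bdevio suite opens with a full reset of the namespace's parent controller, here the one at 0000:00:13.0 backing Nvme3n1. A hedged manual equivalent over RPC, assuming the bdev_nvme_reset_controller method targets the controller by the name it was attached under:

# Reset the controller attached above as Nvme3 (0000:00:13.0); the bdev layer
# should log the same resetting/successful notice pair seen in this suite.
scripts/rpc.py bdev_nvme_reset_controller Nvme3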
00:07:01.032 passed 00:07:01.032 Test: blockdev write read 8 blocks ...passed 00:07:01.032 Test: blockdev write read size > 128k ...passed 00:07:01.032 Test: blockdev write read invalid size ...passed 00:07:01.032 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.032 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.032 Test: blockdev write read max offset ...passed 00:07:01.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.032 Test: blockdev writev readv 8 blocks ...passed 00:07:01.032 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.032 Test: blockdev writev readv block ...passed 00:07:01.032 Test: blockdev writev readv size > 128k ...passed 00:07:01.032 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.032 Test: blockdev comparev and writev ...[2024-11-20 17:41:28.075516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b7e0a000 len:0x1000 00:07:01.032 [2024-11-20 17:41:28.075565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.032 passed 00:07:01.032 Test: blockdev nvme passthru rw ...passed 00:07:01.032 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:41:28.076657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.032 [2024-11-20 17:41:28.076692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.032 passed 00:07:01.032 Test: blockdev nvme admin passthru ...passed 00:07:01.032 Test: blockdev copy ...passed 00:07:01.032 Suite: bdevio tests on: Nvme2n3 00:07:01.032 Test: blockdev write read block ...passed 00:07:01.032 Test: blockdev write zeroes read block ...passed 00:07:01.032 Test: blockdev write zeroes read no split ...passed 00:07:01.032 Test: blockdev write zeroes read split ...passed 00:07:01.032 Test: blockdev write zeroes read split partial ...passed 00:07:01.032 Test: blockdev reset ...[2024-11-20 17:41:28.160606] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:01.032 passed 00:07:01.032 Test: blockdev write read 8 blocks ...[2024-11-20 17:41:28.164664] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
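The COMPARE FAILURE (02/85) completions logged under 'comparev and writev' (first for Nvme3n1 above, then once per namespace below) appear to be the deliberately provoked miscompare path, since each case is still recorded as passed. The test only runs where the bdev advertises compare support; a quick check against the dump earlier in this log (jq filter assumed):

# Print per-bdev compare support from the live target's bdev list.
scripts/rpc.py bdev_get_bdevs | jq -r '.[] | "\(.name) compare=\(.supported_io_types.compare)"'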
00:07:01.032 passed 00:07:01.032 Test: blockdev write read size > 128k ...passed 00:07:01.032 Test: blockdev write read invalid size ...passed 00:07:01.032 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.032 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.032 Test: blockdev write read max offset ...passed 00:07:01.032 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.032 Test: blockdev writev readv 8 blocks ...passed 00:07:01.032 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.032 Test: blockdev writev readv block ...passed 00:07:01.032 Test: blockdev writev readv size > 128k ...passed 00:07:01.032 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.032 Test: blockdev comparev and writev ...[2024-11-20 17:41:28.173163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29a806000 len:0x1000 00:07:01.032 [2024-11-20 17:41:28.173209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.032 passed 00:07:01.032 Test: blockdev nvme passthru rw ...passed 00:07:01.032 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:41:28.174109] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.032 [2024-11-20 17:41:28.174140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.032 passed 00:07:01.032 Test: blockdev nvme admin passthru ...passed 00:07:01.032 Test: blockdev copy ...passed 00:07:01.033 Suite: bdevio tests on: Nvme2n2 00:07:01.033 Test: blockdev write read block ...passed 00:07:01.033 Test: blockdev write zeroes read block ...passed 00:07:01.033 Test: blockdev write zeroes read no split ...passed 00:07:01.292 Test: blockdev write zeroes read split ...passed 00:07:01.292 Test: blockdev write zeroes read split partial ...passed 00:07:01.292 Test: blockdev reset ...[2024-11-20 17:41:28.252194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:01.292 passed 00:07:01.292 Test: blockdev write read 8 blocks ...[2024-11-20 17:41:28.256194] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:01.292 passed 00:07:01.292 Test: blockdev write read size > 128k ...passed 00:07:01.292 Test: blockdev write read invalid size ...passed 00:07:01.292 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.292 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.292 Test: blockdev write read max offset ...passed 00:07:01.292 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.292 Test: blockdev writev readv 8 blocks ...passed 00:07:01.292 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.292 Test: blockdev writev readv block ...passed 00:07:01.292 Test: blockdev writev readv size > 128k ...passed 00:07:01.292 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.292 Test: blockdev comparev and writev ...[2024-11-20 17:41:28.264700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e3c000 len:0x1000 00:07:01.292 [2024-11-20 17:41:28.264745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.292 passed 00:07:01.292 Test: blockdev nvme passthru rw ...passed 00:07:01.292 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:41:28.265651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.293 [2024-11-20 17:41:28.265682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.293 passed 00:07:01.293 Test: blockdev nvme admin passthru ...passed 00:07:01.293 Test: blockdev copy ...passed 00:07:01.293 Suite: bdevio tests on: Nvme2n1 00:07:01.293 Test: blockdev write read block ...passed 00:07:01.293 Test: blockdev write zeroes read block ...passed 00:07:01.293 Test: blockdev write zeroes read no split ...passed 00:07:01.293 Test: blockdev write zeroes read split ...passed 00:07:01.293 Test: blockdev write zeroes read split partial ...passed 00:07:01.293 Test: blockdev reset ...[2024-11-20 17:41:28.345014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:01.293 passed 00:07:01.293 Test: blockdev write read 8 blocks ...[2024-11-20 17:41:28.349090] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:01.293 passed 00:07:01.293 Test: blockdev write read size > 128k ...passed 00:07:01.293 Test: blockdev write read invalid size ...passed 00:07:01.293 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.293 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.293 Test: blockdev write read max offset ...passed 00:07:01.293 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.293 Test: blockdev writev readv 8 blocks ...passed 00:07:01.293 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.293 Test: blockdev writev readv block ...passed 00:07:01.293 Test: blockdev writev readv size > 128k ...passed 00:07:01.293 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.293 Test: blockdev comparev and writev ...[2024-11-20 17:41:28.357312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e38000 len:0x1000 00:07:01.293 [2024-11-20 17:41:28.357360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.293 passed 00:07:01.293 Test: blockdev nvme passthru rw ...passed 00:07:01.293 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:41:28.358333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.293 [2024-11-20 17:41:28.358365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.293 passed 00:07:01.293 Test: blockdev nvme admin passthru ...passed 00:07:01.293 Test: blockdev copy ...passed 00:07:01.293 Suite: bdevio tests on: Nvme1n1 00:07:01.293 Test: blockdev write read block ...passed 00:07:01.293 Test: blockdev write zeroes read block ...passed 00:07:01.293 Test: blockdev write zeroes read no split ...passed 00:07:01.293 Test: blockdev write zeroes read split ...passed 00:07:01.293 Test: blockdev write zeroes read split partial ...passed 00:07:01.293 Test: blockdev reset ...[2024-11-20 17:41:28.437579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:01.293 [2024-11-20 17:41:28.441222] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:01.293 passed 00:07:01.293 Test: blockdev write read 8 blocks ...passed 00:07:01.293 Test: blockdev write read size > 128k ...passed 00:07:01.293 Test: blockdev write read invalid size ...passed 00:07:01.293 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.293 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.293 Test: blockdev write read max offset ...passed 00:07:01.293 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.293 Test: blockdev writev readv 8 blocks ...passed 00:07:01.293 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.293 Test: blockdev writev readv block ...passed 00:07:01.293 Test: blockdev writev readv size > 128k ...passed 00:07:01.293 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.293 Test: blockdev comparev and writev ...[2024-11-20 17:41:28.450368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c7e34000 len:0x1000 00:07:01.293 [2024-11-20 17:41:28.450415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:01.293 passed 00:07:01.293 Test: blockdev nvme passthru rw ...passed 00:07:01.293 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:41:28.451394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:01.293 [2024-11-20 17:41:28.451428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:01.293 passed 00:07:01.293 Test: blockdev nvme admin passthru ...passed 00:07:01.293 Test: blockdev copy ...passed 00:07:01.293 Suite: bdevio tests on: Nvme0n1 00:07:01.293 Test: blockdev write read block ...passed 00:07:01.293 Test: blockdev write zeroes read block ...passed 00:07:01.552 Test: blockdev write zeroes read no split ...passed 00:07:01.552 Test: blockdev write zeroes read split ...passed 00:07:01.552 Test: blockdev write zeroes read split partial ...passed 00:07:01.552 Test: blockdev reset ...[2024-11-20 17:41:28.532954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:01.552 passed 00:07:01.552 Test: blockdev write read 8 blocks ...[2024-11-20 17:41:28.536465] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:01.552 passed 00:07:01.552 Test: blockdev write read size > 128k ...passed 00:07:01.552 Test: blockdev write read invalid size ...passed 00:07:01.552 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:01.552 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:01.552 Test: blockdev write read max offset ...passed 00:07:01.552 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:01.552 Test: blockdev writev readv 8 blocks ...passed 00:07:01.552 Test: blockdev writev readv 30 x 1block ...passed 00:07:01.552 Test: blockdev writev readv block ...passed 00:07:01.552 Test: blockdev writev readv size > 128k ...passed 00:07:01.552 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:01.552 Test: blockdev comparev and writev ...[2024-11-20 17:41:28.544162] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:01.552 separate metadata which is not supported yet. 
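comparev_and_writev is skipped only on Nvme0n1 because it is the one namespace formatted with separate metadata (md_size 64, md_interleave false in the bdev dump above), which the comparison path does not handle yet. A way to spot such bdevs from the same dump (filter assumed):

# Select bdevs with a non-interleaved metadata format; only Nvme0n1 matches here.
scripts/rpc.py bdev_get_bdevs | jq '.[] | select((.md_size // 0) > 0 and .md_interleave == false) | {name, md_size, md_interleave}'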
00:07:01.552 passed 00:07:01.552 Test: blockdev nvme passthru rw ...passed 00:07:01.552 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:41:28.544750] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:01.552 [2024-11-20 17:41:28.544801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:01.552 passed 00:07:01.552 Test: blockdev nvme admin passthru ...passed 00:07:01.552 Test: blockdev copy ...passed 00:07:01.552 00:07:01.552 Run Summary: Type Total Ran Passed Failed Inactive 00:07:01.552 suites 6 6 n/a 0 0 00:07:01.552 tests 138 138 138 0 0 00:07:01.552 asserts 893 893 893 0 n/a 00:07:01.552 00:07:01.552 Elapsed time = 1.513 seconds 00:07:01.552 0 00:07:01.552 17:41:28 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61069 00:07:01.552 17:41:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61069 ']' 00:07:01.552 17:41:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61069 00:07:01.552 17:41:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:01.552 17:41:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.552 17:41:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61069 00:07:01.552 17:41:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.552 17:41:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.552 killing process with pid 61069 00:07:01.552 17:41:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61069' 00:07:01.552 17:41:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61069 00:07:01.552 17:41:28 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61069 00:07:02.933 17:41:29 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:02.933 00:07:02.933 real 0m2.906s 00:07:02.933 user 0m7.374s 00:07:02.933 sys 0m0.438s 00:07:02.933 17:41:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.933 17:41:29 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:02.933 ************************************ 00:07:02.933 END TEST bdev_bounds 00:07:02.934 ************************************ 00:07:02.934 17:41:29 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:02.934 17:41:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:02.934 17:41:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.934 17:41:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.934 ************************************ 00:07:02.934 START TEST bdev_nbd 00:07:02.934 ************************************ 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:02.934 17:41:29 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61134 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61134 /var/tmp/spdk-nbd.sock 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61134 ']' 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.934 17:41:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:02.934 [2024-11-20 17:41:29.889934] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
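The bdev_nbd phase that follows boots a bare bdev_svc app on its own socket, exports each of the six bdevs as a kernel /dev/nbdX node, proves the node with a single direct-I/O dd, and detaches it again. The round-trip for one device, with the socket, names, and paths exactly as traced below:

# Attach Nvme0n1 to /dev/nbd0 via the dedicated nbd RPC socket,
# read one 4 KiB block through the kernel node, then tear it down.
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
dd if=/dev/nbd0 of=test/bdev/nbdtest bs=4096 count=1 iflag=direct
scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0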
00:07:02.934 [2024-11-20 17:41:29.890101] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:02.934 [2024-11-20 17:41:30.087419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.192 [2024-11-20 17:41:30.211404] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.783 17:41:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.783 17:41:30 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:03.783 17:41:30 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:03.783 17:41:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.783 17:41:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.783 17:41:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:03.783 17:41:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:03.783 17:41:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.783 17:41:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:03.783 17:41:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:03.783 17:41:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:03.783 17:41:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:03.783 17:41:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:03.783 17:41:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:03.783 17:41:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.042 1+0 records in 
00:07:04.042 1+0 records out 00:07:04.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000811763 s, 5.0 MB/s 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:04.042 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.302 1+0 records in 00:07:04.302 1+0 records out 00:07:04.302 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536245 s, 7.6 MB/s 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:04.302 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:04.561 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:04.561 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:04.561 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:07:04.561 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:04.561 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:04.561 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:04.561 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:04.561 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:04.821 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:04.821 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:04.821 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:04.821 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:04.821 1+0 records in 00:07:04.821 1+0 records out 00:07:04.821 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000615975 s, 6.6 MB/s 00:07:04.821 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.821 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:04.821 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:04.821 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:04.821 17:41:31 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:04.821 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:04.821 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:04.821 17:41:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.081 1+0 records in 00:07:05.081 1+0 records out 00:07:05.081 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000619788 s, 6.6 MB/s 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.081 17:41:32 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:05.081 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.341 1+0 records in 00:07:05.341 1+0 records out 00:07:05.341 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557361 s, 7.3 MB/s 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:05.341 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:05.601 1+0 records in 00:07:05.601 1+0 records out 00:07:05.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000706881 s, 5.8 MB/s 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:05.601 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.861 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:05.861 { 00:07:05.861 "nbd_device": "/dev/nbd0", 00:07:05.861 "bdev_name": "Nvme0n1" 00:07:05.861 }, 00:07:05.861 { 00:07:05.861 "nbd_device": "/dev/nbd1", 00:07:05.861 "bdev_name": "Nvme1n1" 00:07:05.861 }, 00:07:05.861 { 00:07:05.861 "nbd_device": "/dev/nbd2", 00:07:05.861 "bdev_name": "Nvme2n1" 00:07:05.861 }, 00:07:05.861 { 00:07:05.861 "nbd_device": "/dev/nbd3", 00:07:05.861 "bdev_name": "Nvme2n2" 00:07:05.861 }, 00:07:05.861 { 00:07:05.861 "nbd_device": "/dev/nbd4", 00:07:05.861 "bdev_name": "Nvme2n3" 00:07:05.861 }, 00:07:05.861 { 00:07:05.861 "nbd_device": "/dev/nbd5", 00:07:05.861 "bdev_name": "Nvme3n1" 00:07:05.861 } 00:07:05.861 ]' 00:07:05.861 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:05.861 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:05.861 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:05.861 { 00:07:05.861 "nbd_device": "/dev/nbd0", 00:07:05.861 "bdev_name": "Nvme0n1" 00:07:05.861 }, 00:07:05.861 { 00:07:05.861 "nbd_device": "/dev/nbd1", 00:07:05.861 "bdev_name": "Nvme1n1" 00:07:05.861 }, 00:07:05.861 { 00:07:05.861 "nbd_device": "/dev/nbd2", 00:07:05.861 "bdev_name": "Nvme2n1" 00:07:05.861 }, 00:07:05.861 { 00:07:05.861 "nbd_device": "/dev/nbd3", 00:07:05.861 "bdev_name": "Nvme2n2" 00:07:05.861 }, 00:07:05.861 { 00:07:05.861 "nbd_device": "/dev/nbd4", 00:07:05.861 "bdev_name": "Nvme2n3" 00:07:05.861 }, 00:07:05.861 { 00:07:05.861 "nbd_device": "/dev/nbd5", 00:07:05.861 "bdev_name": "Nvme3n1" 00:07:05.861 } 00:07:05.861 ]' 00:07:05.861 17:41:32 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:05.861 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:05.861 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:05.861 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:05.861 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:05.861 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:05.861 17:41:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.121 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:06.380 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:06.380 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:06.380 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:06.380 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.380 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.380 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:06.380 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.380 17:41:33 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:06.380 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.380 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:06.639 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:06.639 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:06.639 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:06.639 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.639 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.639 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:06.639 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.639 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.639 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.639 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:06.898 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:06.898 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:06.898 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:06.898 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:06.898 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:06.898 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:06.898 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:06.898 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:06.898 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:06.898 17:41:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:07.157 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:07.157 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:07.157 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:07.157 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:07.157 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:07.157 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:07.158 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:07.158 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:07.158 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.158 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.158 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:07.417 17:41:34 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.417 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:07.418 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:07.678 /dev/nbd0 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.678 
17:41:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.678 1+0 records in 00:07:07.678 1+0 records out 00:07:07.678 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504609 s, 8.1 MB/s 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:07.678 17:41:34 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:07.937 /dev/nbd1 00:07:07.937 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:07.937 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:07.937 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:07.937 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:07.937 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:07.937 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:07.937 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:07.937 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:07.937 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:07.937 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:07.937 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:07.937 1+0 records in 00:07:07.937 1+0 records out 00:07:07.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00078294 s, 5.2 MB/s 00:07:07.938 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.938 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:07.938 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:07.938 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:07.938 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 
-- # return 0 00:07:07.938 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.938 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:07.938 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:08.197 /dev/nbd10 00:07:08.197 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:08.197 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.198 1+0 records in 00:07:08.198 1+0 records out 00:07:08.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000656324 s, 6.2 MB/s 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:08.198 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:08.456 /dev/nbd11 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.456 1+0 records in 00:07:08.456 1+0 records out 00:07:08.456 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000743916 s, 5.5 MB/s 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:08.456 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:08.715 /dev/nbd12 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.715 1+0 records in 00:07:08.715 1+0 records out 00:07:08.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000778989 s, 5.3 MB/s 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:08.715 17:41:35 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:08.974 /dev/nbd13 00:07:08.974 17:41:36 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:08.974 1+0 records in 00:07:08.974 1+0 records out 00:07:08.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000722345 s, 5.7 MB/s 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.974 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:09.232 { 00:07:09.232 "nbd_device": "/dev/nbd0", 00:07:09.232 "bdev_name": "Nvme0n1" 00:07:09.232 }, 00:07:09.232 { 00:07:09.232 "nbd_device": "/dev/nbd1", 00:07:09.232 "bdev_name": "Nvme1n1" 00:07:09.232 }, 00:07:09.232 { 00:07:09.232 "nbd_device": "/dev/nbd10", 00:07:09.232 "bdev_name": "Nvme2n1" 00:07:09.232 }, 00:07:09.232 { 00:07:09.232 "nbd_device": "/dev/nbd11", 00:07:09.232 "bdev_name": "Nvme2n2" 00:07:09.232 }, 00:07:09.232 { 00:07:09.232 "nbd_device": "/dev/nbd12", 00:07:09.232 "bdev_name": "Nvme2n3" 00:07:09.232 }, 00:07:09.232 { 00:07:09.232 "nbd_device": "/dev/nbd13", 00:07:09.232 "bdev_name": "Nvme3n1" 00:07:09.232 } 00:07:09.232 ]' 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:09.232 { 00:07:09.232 "nbd_device": "/dev/nbd0", 00:07:09.232 "bdev_name": "Nvme0n1" 00:07:09.232 }, 00:07:09.232 { 00:07:09.232 "nbd_device": "/dev/nbd1", 00:07:09.232 "bdev_name": "Nvme1n1" 00:07:09.232 }, 00:07:09.232 { 
00:07:09.232 "nbd_device": "/dev/nbd10", 00:07:09.232 "bdev_name": "Nvme2n1" 00:07:09.232 }, 00:07:09.232 { 00:07:09.232 "nbd_device": "/dev/nbd11", 00:07:09.232 "bdev_name": "Nvme2n2" 00:07:09.232 }, 00:07:09.232 { 00:07:09.232 "nbd_device": "/dev/nbd12", 00:07:09.232 "bdev_name": "Nvme2n3" 00:07:09.232 }, 00:07:09.232 { 00:07:09.232 "nbd_device": "/dev/nbd13", 00:07:09.232 "bdev_name": "Nvme3n1" 00:07:09.232 } 00:07:09.232 ]' 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:09.232 /dev/nbd1 00:07:09.232 /dev/nbd10 00:07:09.232 /dev/nbd11 00:07:09.232 /dev/nbd12 00:07:09.232 /dev/nbd13' 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:09.232 /dev/nbd1 00:07:09.232 /dev/nbd10 00:07:09.232 /dev/nbd11 00:07:09.232 /dev/nbd12 00:07:09.232 /dev/nbd13' 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:09.232 256+0 records in 00:07:09.232 256+0 records out 00:07:09.232 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00572283 s, 183 MB/s 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.232 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:09.490 256+0 records in 00:07:09.490 256+0 records out 00:07:09.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.124022 s, 8.5 MB/s 00:07:09.490 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.490 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:09.490 256+0 records in 00:07:09.490 256+0 records out 00:07:09.490 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.127406 s, 8.2 MB/s 00:07:09.490 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.490 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:09.749 256+0 records in 00:07:09.749 256+0 records out 00:07:09.749 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.13265 s, 7.9 MB/s 00:07:09.749 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.749 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:09.749 256+0 records in 00:07:09.749 256+0 records out 00:07:09.749 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.125804 s, 8.3 MB/s 00:07:09.749 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:09.749 17:41:36 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:10.008 256+0 records in 00:07:10.008 256+0 records out 00:07:10.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126515 s, 8.3 MB/s 00:07:10.008 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:10.008 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:10.008 256+0 records in 00:07:10.008 256+0 records out 00:07:10.008 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.132149 s, 7.9 MB/s 00:07:10.008 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:10.008 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:10.008 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:10.008 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:10.008 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:10.008 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:10.008 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:10.008 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.008 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:10.008 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.008 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:10.008 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.008 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:10.266 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.266 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:10.266 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.266 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:10.266 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:10.266 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- 
# cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:10.266 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:10.266 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:10.266 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.266 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:10.266 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:10.266 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:10.266 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.266 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:10.525 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:10.525 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:10.525 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:10.525 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.525 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.525 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:10.525 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.525 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.525 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.525 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:10.784 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:10.784 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:10.784 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:10.784 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.784 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.784 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:10.784 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:10.784 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:10.784 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:10.784 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:10.784 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:10.784 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:10.784 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:10.784 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:10.784 17:41:37 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:10.784 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:11.043 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.043 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.043 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.043 17:41:37 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:11.043 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:11.043 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:11.043 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:11.043 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.043 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.043 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:11.043 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.043 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.043 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.043 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:11.302 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:11.302 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:11.302 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:11.302 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.302 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.302 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:11.302 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.302 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.302 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.302 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:11.562 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:11.562 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:11.562 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:11.562 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:11.562 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:11.562 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:11.562 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:11.562 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:11.562 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.562 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.562 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.821 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:11.821 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:11.821 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.821 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:11.821 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:11.821 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.821 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:11.821 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:11.821 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:11.821 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:11.821 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:11.821 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:11.821 17:41:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:11.821 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.821 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:11.821 17:41:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:12.080 malloc_lvol_verify 00:07:12.081 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:12.340 808b7ace-b163-44b9-a88c-7035b54a38bd 00:07:12.340 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:12.599 b70ee78e-bb2d-45d4-afab-ed55cc72dafe 00:07:12.599 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:12.599 /dev/nbd0 00:07:12.858 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:12.858 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:12.858 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:12.858 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:12.858 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:12.858 mke2fs 1.47.0 (5-Feb-2023) 00:07:12.858 Discarding device blocks: 0/4096 done 00:07:12.858 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:12.858 00:07:12.858 Allocating group tables: 0/1 done 00:07:12.858 Writing inode tables: 0/1 done 00:07:12.858 Creating journal (1024 blocks): done 00:07:12.858 Writing superblocks and filesystem accounting information: 0/1 done 00:07:12.858 00:07:12.858 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:12.858 17:41:39 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.858 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:12.858 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:12.858 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:12.858 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.858 17:41:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.858 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.858 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.858 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.858 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.858 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.858 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.858 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:12.858 17:41:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.858 17:41:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61134 00:07:12.858 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61134 ']' 00:07:12.858 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61134 00:07:12.858 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:07:12.858 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.117 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61134 00:07:13.117 killing process with pid 61134 00:07:13.117 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.117 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.117 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61134' 00:07:13.117 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61134 00:07:13.117 17:41:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61134 00:07:14.495 ************************************ 00:07:14.495 END TEST bdev_nbd 00:07:14.495 ************************************ 00:07:14.495 17:41:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:14.495 00:07:14.495 real 0m11.543s 00:07:14.495 user 0m15.002s 00:07:14.495 sys 0m4.822s 00:07:14.495 17:41:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.495 17:41:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:14.495 17:41:41 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:14.495 17:41:41 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:14.495 skipping fio tests on NVMe due to multi-ns failures. 00:07:14.495 17:41:41 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
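The nbd teardown traced above follows one pattern per device: ask the SPDK app to stop the disk over the RPC socket, then poll /proc/partitions until the kernel has dropped the nbd node. A minimal reconstruction of that wait loop, inferred from the nbd_common.sh@35-45 xtrace rather than copied from the script (the sleep interval is an assumption):

    # Poll until the named nbd device (e.g. nbd0) leaves /proc/partitions,
    # giving up after 20 attempts; always returns 0, mirroring the trace.
    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1   # still registered, retry
            else
                break       # gone, teardown for this device is done
            fi
        done
        return 0
    }

In the run above every device was already gone on the first check, which is why each @38 grep is immediately followed by the @41 break.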
00:07:14.495 17:41:41 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:14.495 17:41:41 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:14.495 17:41:41 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:14.495 17:41:41 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:14.495 17:41:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:14.495 ************************************ 00:07:14.495 START TEST bdev_verify 00:07:14.495 ************************************ 00:07:14.495 17:41:41 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:14.495 [2024-11-20 17:41:41.475058] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:07:14.495 [2024-11-20 17:41:41.475809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61521 ] 00:07:14.495 [2024-11-20 17:41:41.663397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.756 [2024-11-20 17:41:41.775569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.756 [2024-11-20 17:41:41.775615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.322 Running I/O for 5 seconds... 00:07:17.637 22144.00 IOPS, 86.50 MiB/s [2024-11-20T17:41:45.749Z] 21120.00 IOPS, 82.50 MiB/s [2024-11-20T17:41:46.686Z] 20629.33 IOPS, 80.58 MiB/s [2024-11-20T17:41:47.621Z] 20848.00 IOPS, 81.44 MiB/s [2024-11-20T17:41:47.881Z] 20812.80 IOPS, 81.30 MiB/s 00:07:20.705 Latency(us) 00:07:20.705 [2024-11-20T17:41:47.881Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:20.705 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.705 Verification LBA range: start 0x0 length 0xbd0bd 00:07:20.705 Nvme0n1 : 5.04 1701.47 6.65 0.00 0.00 74960.19 15475.97 75379.56 00:07:20.705 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.705 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:20.705 Nvme0n1 : 5.06 1708.08 6.67 0.00 0.00 74635.21 8211.74 76642.90 00:07:20.705 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.705 Verification LBA range: start 0x0 length 0xa0000 00:07:20.705 Nvme1n1 : 5.04 1701.00 6.64 0.00 0.00 74830.96 16002.36 61482.77 00:07:20.705 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.705 Verification LBA range: start 0xa0000 length 0xa0000 00:07:20.705 Nvme1n1 : 5.06 1707.63 6.67 0.00 0.00 74504.96 8422.30 67799.49 00:07:20.705 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.705 Verification LBA range: start 0x0 length 0x80000 00:07:20.705 Nvme2n1 : 5.09 1711.54 6.69 0.00 0.00 74280.04 14949.58 56850.51 00:07:20.705 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.705 Verification LBA range: start 0x80000 length 0x80000 00:07:20.705 Nvme2n1 : 5.08 1713.54 6.69 0.00 0.00 74163.20 16739.32 58113.85 00:07:20.705 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.705 Verification LBA range: start 0x0 length 0x80000 00:07:20.705 Nvme2n2 : 5.09 1711.15 6.68 0.00 0.00 74158.05 15054.86 54744.93 00:07:20.705 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.705 Verification LBA range: start 0x80000 length 0x80000 00:07:20.705 Nvme2n2 : 5.08 1713.04 6.69 0.00 0.00 74041.33 17370.99 54744.93 00:07:20.705 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.705 Verification LBA range: start 0x0 length 0x80000 00:07:20.705 Nvme2n3 : 5.09 1710.76 6.68 0.00 0.00 74082.61 13212.48 56429.39 00:07:20.705 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.705 Verification LBA range: start 0x80000 length 0x80000 00:07:20.705 Nvme2n3 : 5.08 1712.43 6.69 0.00 0.00 73928.94 17265.71 54323.82 00:07:20.705 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:20.705 Verification LBA range: start 0x0 length 0x20000 00:07:20.705 Nvme3n1 : 5.09 1710.38 6.68 0.00 0.00 74008.04 12528.17 57692.74 00:07:20.705 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:20.705 Verification LBA range: start 0x20000 length 0x20000 00:07:20.705 Nvme3n1 : 5.08 1711.80 6.69 0.00 0.00 73833.45 13896.79 57271.62 00:07:20.705 [2024-11-20T17:41:47.881Z] =================================================================================================================== 00:07:20.705 [2024-11-20T17:41:47.881Z] Total : 20512.82 80.13 0.00 0.00 74283.73 8211.74 76642.90 00:07:22.084 00:07:22.084 real 0m7.710s 00:07:22.084 user 0m14.243s 00:07:22.084 sys 0m0.317s 00:07:22.084 ************************************ 00:07:22.084 END TEST bdev_verify 00:07:22.084 ************************************ 00:07:22.084 17:41:49 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.084 17:41:49 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:22.084 17:41:49 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:22.084 17:41:49 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:07:22.084 17:41:49 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.084 17:41:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:22.084 ************************************ 00:07:22.084 START TEST bdev_verify_big_io 00:07:22.084 ************************************ 00:07:22.084 17:41:49 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:22.084 [2024-11-20 17:41:49.253205] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
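bdev_verify above and the bdev_verify_big_io test now starting use the same bdevperf harness; only the IO size (-o 4096 vs -o 65536) and the pass/fail thresholds differ. A hedged reading of the flags visible in the two command lines (the -C flag and the trailing '' are passed through from the harness and left unannotated here):

    #   --json <file>  bdev configuration to load at startup
    #   -q 128         queue depth kept outstanding per target
    #   -o 4096        IO size in bytes (65536 for the big-IO pass)
    #   -w verify      write, read back and compare workload
    #   -t 5           run time in seconds
    #   -m 0x3         core mask: cores 0 and 1
    ./build/examples/bdevperf --json ./test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -m 0x3

The 0x3 mask matches the two "Reactor started on core 0/1" notices and the paired Core Mask 0x1/0x2 rows in the result tables.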
00:07:22.084 [2024-11-20 17:41:49.253328] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61630 ] 00:07:22.343 [2024-11-20 17:41:49.437906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:22.601 [2024-11-20 17:41:49.555865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.601 [2024-11-20 17:41:49.555891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.539 Running I/O for 5 seconds... 00:07:28.034 2369.00 IOPS, 148.06 MiB/s [2024-11-20T17:41:56.161Z] 3434.00 IOPS, 214.62 MiB/s [2024-11-20T17:41:56.161Z] 3943.67 IOPS, 246.48 MiB/s 00:07:28.985 Latency(us) 00:07:28.985 [2024-11-20T17:41:56.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:28.985 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:28.985 Verification LBA range: start 0x0 length 0xbd0b 00:07:28.985 Nvme0n1 : 5.39 178.70 11.17 0.00 0.00 695701.79 20950.46 693997.29 00:07:28.985 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:28.985 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:28.985 Nvme0n1 : 5.50 174.50 10.91 0.00 0.00 709626.98 13159.84 825385.12 00:07:28.985 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:28.986 Verification LBA range: start 0x0 length 0xa000 00:07:28.986 Nvme1n1 : 5.49 186.40 11.65 0.00 0.00 657252.55 86328.55 616512.15 00:07:28.986 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:28.986 Verification LBA range: start 0xa000 length 0xa000 00:07:28.986 Nvme1n1 : 5.59 174.65 10.92 0.00 0.00 693443.71 67799.49 677152.69 00:07:28.986 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:28.986 Verification LBA range: start 0x0 length 0x8000 00:07:28.986 Nvme2n1 : 5.56 184.96 11.56 0.00 0.00 642053.10 97277.53 677152.69 00:07:28.986 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:28.986 Verification LBA range: start 0x8000 length 0x8000 00:07:28.986 Nvme2n1 : 5.62 168.32 10.52 0.00 0.00 701659.04 86749.66 1091529.72 00:07:28.986 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:28.986 Verification LBA range: start 0x0 length 0x8000 00:07:28.986 Nvme2n2 : 5.59 194.70 12.17 0.00 0.00 606171.23 27161.91 646832.42 00:07:28.986 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:28.986 Verification LBA range: start 0x8000 length 0x8000 00:07:28.986 Nvme2n2 : 5.64 177.82 11.11 0.00 0.00 653944.93 16634.04 1118481.07 00:07:28.986 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:28.986 Verification LBA range: start 0x0 length 0x8000 00:07:28.986 Nvme2n3 : 5.61 200.54 12.53 0.00 0.00 577565.29 18107.94 667045.94 00:07:28.986 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:28.986 Verification LBA range: start 0x8000 length 0x8000 00:07:28.986 Nvme2n3 : 5.68 183.73 11.48 0.00 0.00 614522.41 16528.76 1138694.58 00:07:28.986 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:28.986 Verification LBA range: start 0x0 length 0x2000 00:07:28.986 Nvme3n1 : 5.63 208.48 13.03 0.00 0.00 544180.09 9106.61 673783.78 00:07:28.986 Job: Nvme3n1 (Core Mask 0x2, workload: verify, 
depth: 128, IO size: 65536) 00:07:28.986 Verification LBA range: start 0x2000 length 0x2000 00:07:28.986 Nvme3n1 : 5.71 221.72 13.86 0.00 0.00 499894.13 743.53 845598.64 00:07:28.986 [2024-11-20T17:41:56.162Z] =================================================================================================================== 00:07:28.986 [2024-11-20T17:41:56.162Z] Total : 2254.54 140.91 0.00 0.00 627394.96 743.53 1138694.58 00:07:31.550 ************************************ 00:07:31.551 END TEST bdev_verify_big_io 00:07:31.551 ************************************ 00:07:31.551 00:07:31.551 real 0m8.936s 00:07:31.551 user 0m16.669s 00:07:31.551 sys 0m0.336s 00:07:31.551 17:41:58 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.551 17:41:58 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:31.551 17:41:58 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:31.551 17:41:58 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:31.551 17:41:58 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.551 17:41:58 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:31.551 ************************************ 00:07:31.551 START TEST bdev_write_zeroes 00:07:31.551 ************************************ 00:07:31.551 17:41:58 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:31.551 [2024-11-20 17:41:58.273700] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:07:31.551 [2024-11-20 17:41:58.274111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61739 ] 00:07:31.551 [2024-11-20 17:41:58.461848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.551 [2024-11-20 17:41:58.568696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.117 Running I/O for 1 seconds... 
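The throughput columns in these bdevperf summaries are plain arithmetic on IOPS and the fixed IO size, which makes the tables easy to sanity-check. For the first line of the write_zeroes run below (4096-byte IOs) and the first big-IO progress line above (65536-byte IOs):

    74880.00 IOPS * 4096 B  = 306,708,480 B/s = 292.50 MiB/s
     2369.00 IOPS * 65536 B = 155,254,784 B/s = 148.06 MiB/s   (1 MiB = 1048576 B)

Both match the MiB/s figures printed by bdevperf to the last digit.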
00:07:33.489 74880.00 IOPS, 292.50 MiB/s 00:07:33.489 Latency(us) 00:07:33.489 [2024-11-20T17:42:00.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.489 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:33.489 Nvme0n1 : 1.02 12403.43 48.45 0.00 0.00 10295.23 8422.30 28846.37 00:07:33.489 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:33.490 Nvme1n1 : 1.02 12390.22 48.40 0.00 0.00 10294.05 8685.49 28846.37 00:07:33.490 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:33.490 Nvme2n1 : 1.02 12377.86 48.35 0.00 0.00 10283.93 8369.66 29056.93 00:07:33.490 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:33.490 Nvme2n2 : 1.02 12366.37 48.31 0.00 0.00 10204.15 8632.85 21582.14 00:07:33.490 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:33.490 Nvme2n3 : 1.03 12409.82 48.48 0.00 0.00 10152.51 5764.01 18529.05 00:07:33.490 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:33.490 Nvme3n1 : 1.03 12398.28 48.43 0.00 0.00 10131.06 5737.69 19371.28 00:07:33.490 [2024-11-20T17:42:00.666Z] =================================================================================================================== 00:07:33.490 [2024-11-20T17:42:00.666Z] Total : 74345.98 290.41 0.00 0.00 10226.68 5737.69 29056.93 00:07:34.427 ************************************ 00:07:34.427 END TEST bdev_write_zeroes 00:07:34.427 ************************************ 00:07:34.427 00:07:34.427 real 0m3.322s 00:07:34.427 user 0m2.929s 00:07:34.427 sys 0m0.279s 00:07:34.427 17:42:01 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.427 17:42:01 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:34.427 17:42:01 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:34.427 17:42:01 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:34.427 17:42:01 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.427 17:42:01 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:34.427 ************************************ 00:07:34.427 START TEST bdev_json_nonenclosed 00:07:34.427 ************************************ 00:07:34.427 17:42:01 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:34.686 [2024-11-20 17:42:01.673488] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
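bdev_json_nonenclosed, whose startup is interleaved here, is a negative test: bdevperf is pointed at a configuration whose top level is not a JSON object, and the expected outcome is the "not enclosed in {}" rejection logged just below. A sketch of the malformed shape (illustrative contents only; the real nonenclosed.json is not shown in this log):

    # Hypothetical reproduction of the malformed input:
    cat > nonenclosed.json <<'EOF'
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]
    EOF
    # A valid SPDK config wraps the same content in a top-level object:
    #   { "subsystems": [ ... ] }

Because json_config_prepare_ctx rejects the file, bdevperf exits non-zero ("spdk_app_stop'd on non-zero"), which is what the test expects.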
00:07:34.686 [2024-11-20 17:42:01.673628] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61798 ] 00:07:34.945 [2024-11-20 17:42:01.861027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.945 [2024-11-20 17:42:01.977409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.945 [2024-11-20 17:42:01.977505] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:34.945 [2024-11-20 17:42:01.977528] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:34.945 [2024-11-20 17:42:01.977541] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.204 00:07:35.204 real 0m0.669s 00:07:35.204 user 0m0.413s 00:07:35.204 sys 0m0.152s 00:07:35.204 ************************************ 00:07:35.204 END TEST bdev_json_nonenclosed 00:07:35.204 17:42:02 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.204 17:42:02 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:35.204 ************************************ 00:07:35.204 17:42:02 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:35.204 17:42:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:35.204 17:42:02 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.204 17:42:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.204 ************************************ 00:07:35.204 START TEST bdev_json_nonarray 00:07:35.204 ************************************ 00:07:35.204 17:42:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:35.463 [2024-11-20 17:42:02.408251] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:07:35.463 [2024-11-20 17:42:02.408508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61824 ] 00:07:35.463 [2024-11-20 17:42:02.590416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.722 [2024-11-20 17:42:02.709243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.722 [2024-11-20 17:42:02.709544] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
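bdev_json_nonarray is the companion negative test: the document is a proper object, but "subsystems" maps to something other than an array, producing the "should be an array" error just logged. An illustrative shape that would trip the same check (again an assumption, not the file's actual contents):

    cat > nonarray.json <<'EOF'
    { "subsystems": { "subsystem": "bdev" } }
    EOF
    # json_config.c requires .subsystems to be a JSON array, so parsing
    # stops in json_config_prepare_ctx and the app exits non-zero.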
00:07:35.722 [2024-11-20 17:42:02.709575] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:35.722 [2024-11-20 17:42:02.709588] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:35.981 00:07:35.981 real 0m0.654s 00:07:35.981 user 0m0.415s 00:07:35.981 sys 0m0.134s 00:07:35.981 17:42:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.981 ************************************ 00:07:35.981 END TEST bdev_json_nonarray 00:07:35.981 ************************************ 00:07:35.981 17:42:02 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:35.981 17:42:03 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:35.981 17:42:03 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:35.981 17:42:03 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:35.981 17:42:03 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:35.981 17:42:03 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:35.981 17:42:03 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:35.981 17:42:03 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:35.981 17:42:03 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:35.981 17:42:03 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:35.981 17:42:03 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:35.981 17:42:03 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:35.981 ************************************ 00:07:35.981 END TEST blockdev_nvme 00:07:35.981 ************************************ 00:07:35.981 00:07:35.981 real 0m43.273s 00:07:35.981 user 1m3.799s 00:07:35.981 sys 0m7.954s 00:07:35.981 17:42:03 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.981 17:42:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:35.981 17:42:03 -- spdk/autotest.sh@209 -- # uname -s 00:07:35.981 17:42:03 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:35.981 17:42:03 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:35.981 17:42:03 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:35.981 17:42:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.981 17:42:03 -- common/autotest_common.sh@10 -- # set +x 00:07:35.981 ************************************ 00:07:35.981 START TEST blockdev_nvme_gpt 00:07:35.981 ************************************ 00:07:35.981 17:42:03 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:36.241 * Looking for test storage... 
00:07:36.241 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:36.241 17:42:03 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:36.241 17:42:03 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:07:36.241 17:42:03 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:36.241 17:42:03 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:36.241 17:42:03 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:36.241 17:42:03 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:36.241 17:42:03 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:36.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.241 --rc genhtml_branch_coverage=1 00:07:36.241 --rc genhtml_function_coverage=1 00:07:36.241 --rc genhtml_legend=1 00:07:36.241 --rc geninfo_all_blocks=1 00:07:36.241 --rc geninfo_unexecuted_blocks=1 00:07:36.241 00:07:36.241 ' 00:07:36.241 17:42:03 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:36.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.241 --rc 
genhtml_branch_coverage=1 00:07:36.241 --rc genhtml_function_coverage=1 00:07:36.241 --rc genhtml_legend=1 00:07:36.241 --rc geninfo_all_blocks=1 00:07:36.241 --rc geninfo_unexecuted_blocks=1 00:07:36.241 00:07:36.241 ' 00:07:36.241 17:42:03 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:36.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.241 --rc genhtml_branch_coverage=1 00:07:36.241 --rc genhtml_function_coverage=1 00:07:36.241 --rc genhtml_legend=1 00:07:36.241 --rc geninfo_all_blocks=1 00:07:36.241 --rc geninfo_unexecuted_blocks=1 00:07:36.241 00:07:36.241 ' 00:07:36.241 17:42:03 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:36.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:36.241 --rc genhtml_branch_coverage=1 00:07:36.241 --rc genhtml_function_coverage=1 00:07:36.241 --rc genhtml_legend=1 00:07:36.241 --rc geninfo_all_blocks=1 00:07:36.241 --rc geninfo_unexecuted_blocks=1 00:07:36.241 00:07:36.241 ' 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61909 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; 
exit 1' SIGINT SIGTERM EXIT 00:07:36.241 17:42:03 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61909 00:07:36.241 17:42:03 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 61909 ']' 00:07:36.241 17:42:03 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.241 17:42:03 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.242 17:42:03 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.242 17:42:03 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.242 17:42:03 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:36.500 [2024-11-20 17:42:03.488162] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:07:36.500 [2024-11-20 17:42:03.488463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61909 ] 00:07:36.500 [2024-11-20 17:42:03.670862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.759 [2024-11-20 17:42:03.782237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.695 17:42:04 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.695 17:42:04 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:07:37.695 17:42:04 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:37.695 17:42:04 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:37.695 17:42:04 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:38.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:38.523 Waiting for block devices as requested 00:07:38.523 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.523 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.783 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:38.783 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:44.059 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:44.059 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:44.059 17:42:11 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:44.059 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:44.060 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:44.060 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:07:44.060 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:07:44.060 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:44.060 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:44.060 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:44.060 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:07:44.060 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:07:44.060 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:44.060 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:44.060 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:07:44.060 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:07:44.060 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:07:44.060 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:44.060 17:42:11 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:44.060 17:42:11 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:44.060 BYT; 00:07:44.060 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:44.060 BYT; 00:07:44.060 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:44.060 17:42:11 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:44.060 17:42:11 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:45.442 The operation has completed successfully. 00:07:45.442 17:42:12 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:46.379 The operation has completed successfully. 00:07:46.379 17:42:13 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:46.946 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:47.514 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:47.514 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:47.514 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:47.514 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:47.774 17:42:14 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:47.774 17:42:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.774 17:42:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:47.774 [] 00:07:47.774 17:42:14 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:47.774 17:42:14 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:47.774 17:42:14 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:47.774 17:42:14 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:47.774 17:42:14 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:47.774 17:42:14 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:47.774 17:42:14 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:47.774 17:42:14 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:48.033 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.033 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:48.033 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.033 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:48.033 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.033 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:48.033 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:48.033 17:42:15 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.033 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:48.293 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.293 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:48.293 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.293 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:48.293 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.293 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:48.293 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.293 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:48.293 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.293 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:48.293 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:48.293 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:48.293 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:48.293 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:48.293 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:48.293 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:48.293 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:48.294 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "88c7b414-0360-464d-9ddd-0d0b94c02243"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "88c7b414-0360-464d-9ddd-0d0b94c02243",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "14d459f2-ef29-4e1d-96b7-a56d971802d1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "14d459f2-ef29-4e1d-96b7-a56d971802d1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "edd272d5-0477-44fa-87df-3a6730e118fe"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "edd272d5-0477-44fa-87df-3a6730e118fe",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "ab8460e5-7ed8-4ba9-bb71-ce6e56a3fbfd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ab8460e5-7ed8-4ba9-bb71-ce6e56a3fbfd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "9acf17c3-d92a-4496-8555-396b7b34ecfe"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9acf17c3-d92a-4496-8555-396b7b34ecfe",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:48.294 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:48.294 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:48.294 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:48.294 17:42:15 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61909 00:07:48.294 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 61909 ']' 00:07:48.294 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 61909 00:07:48.294 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:07:48.294 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:48.294 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61909 00:07:48.554 killing process with pid 61909 00:07:48.554 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:48.554 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:48.554 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61909' 00:07:48.554 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 61909 00:07:48.554 17:42:15 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 61909 00:07:51.138 17:42:17 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:51.138 17:42:17 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:51.138 17:42:17 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:07:51.138 17:42:17 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:51.138 17:42:17 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.138 ************************************ 00:07:51.138 START TEST bdev_hello_world 00:07:51.138 ************************************ 00:07:51.138 17:42:17 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:51.138 
[2024-11-20 17:42:18.001659] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:07:51.138 [2024-11-20 17:42:18.001801] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62558 ] 00:07:51.138 [2024-11-20 17:42:18.184037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.138 [2024-11-20 17:42:18.292478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.075 [2024-11-20 17:42:18.949705] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:52.075 [2024-11-20 17:42:18.949754] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:52.075 [2024-11-20 17:42:18.949948] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:52.075 [2024-11-20 17:42:18.953463] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:52.075 [2024-11-20 17:42:18.954121] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:52.075 [2024-11-20 17:42:18.954178] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:52.075 [2024-11-20 17:42:18.954409] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:52.075 00:07:52.075 [2024-11-20 17:42:18.954451] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:53.013 00:07:53.013 real 0m2.197s 00:07:53.013 user 0m1.821s 00:07:53.013 sys 0m0.266s 00:07:53.013 ************************************ 00:07:53.013 END TEST bdev_hello_world 00:07:53.013 ************************************ 00:07:53.013 17:42:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.013 17:42:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:53.013 17:42:20 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:53.013 17:42:20 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:53.013 17:42:20 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:53.013 17:42:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:53.013 ************************************ 00:07:53.013 START TEST bdev_bounds 00:07:53.013 ************************************ 00:07:53.013 17:42:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:07:53.013 17:42:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62600 00:07:53.013 17:42:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:53.013 17:42:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:53.013 Process bdevio pid: 62600 00:07:53.013 17:42:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62600' 00:07:53.013 17:42:20 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62600 00:07:53.013 17:42:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 62600 ']' 00:07:53.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:53.013 17:42:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:53.013 17:42:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:53.013 17:42:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:53.013 17:42:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:53.013 17:42:20 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:07:53.273 [2024-11-20 17:42:20.269929] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization...
00:07:53.273 [2024-11-20 17:42:20.270257] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62600 ]
00:07:53.273 [2024-11-20 17:42:20.441968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:53.532 [2024-11-20 17:42:20.554977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:07:53.532 [2024-11-20 17:42:20.555037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:53.532 [2024-11-20 17:42:20.555072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:54.124 17:42:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:54.124 17:42:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0
00:07:54.124 17:42:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
00:07:54.384 I/O targets:
00:07:54.384 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB)
00:07:54.384 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB)
00:07:54.384 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB)
00:07:54.384 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:54.384 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:54.384 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB)
00:07:54.384 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB)
00:07:54.384
00:07:54.384 CUnit - A unit testing framework for C - Version 2.1-3
00:07:54.384 http://cunit.sourceforge.net/
00:07:54.384
00:07:54.384 Suite: bdevio tests on: Nvme3n1
00:07:54.384 Test: blockdev write read block ...passed
00:07:54.384 Test: blockdev write zeroes read block ...passed
00:07:54.384 Test: blockdev write zeroes read no split ...passed
00:07:54.384 Test: blockdev write zeroes read split ...passed
00:07:54.384 Test: blockdev write zeroes read split partial ...passed
00:07:54.384 Test: blockdev reset ...[2024-11-20 17:42:21.409859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller
00:07:54.384 [2024-11-20 17:42:21.415039] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful.
00:07:54.384 passed
00:07:54.384 Test: blockdev write read 8 blocks ...passed
00:07:54.384 Test: blockdev write read size > 128k ...passed
00:07:54.384 Test: blockdev write read invalid size ...passed
00:07:54.384 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:54.384 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:54.384 Test: blockdev write read max offset ...passed
00:07:54.384 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:54.384 Test: blockdev writev readv 8 blocks ...passed
00:07:54.384 Test: blockdev writev readv 30 x 1block ...passed
00:07:54.384 Test: blockdev writev readv block ...passed
00:07:54.384 Test: blockdev writev readv size > 128k ...passed
00:07:54.384 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:54.384 Test: blockdev comparev and writev ...[2024-11-20 17:42:21.424504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5604000 len:0x1000
00:07:54.384 [2024-11-20 17:42:21.424554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:54.384 passed
00:07:54.384 Test: blockdev nvme passthru rw ...passed
00:07:54.384 Test: blockdev nvme passthru vendor specific ...passed
00:07:54.384 Test: blockdev nvme admin passthru ...[2024-11-20 17:42:21.425362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:54.384 [2024-11-20 17:42:21.425401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:54.384 passed
00:07:54.384 Test: blockdev copy ...passed
00:07:54.384 Suite: bdevio tests on: Nvme2n3
00:07:54.384 Test: blockdev write read block ...passed
00:07:54.384 Test: blockdev write zeroes read block ...passed
00:07:54.384 Test: blockdev write zeroes read no split ...passed
00:07:54.384 Test: blockdev write zeroes read split ...passed
00:07:54.384 Test: blockdev write zeroes read split partial ...passed
00:07:54.384 Test: blockdev reset ...[2024-11-20 17:42:21.505758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:07:54.384 [2024-11-20 17:42:21.510145] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:54.384 passed
00:07:54.384 Test: blockdev write read 8 blocks ...passed
00:07:54.384 Test: blockdev write read size > 128k ...passed
00:07:54.384 Test: blockdev write read invalid size ...passed
00:07:54.384 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:54.384 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:54.384 Test: blockdev write read max offset ...passed
00:07:54.384 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:54.384 Test: blockdev writev readv 8 blocks ...passed
00:07:54.384 Test: blockdev writev readv 30 x 1block ...passed
00:07:54.385 Test: blockdev writev readv block ...passed
00:07:54.385 Test: blockdev writev readv size > 128k ...passed
00:07:54.385 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:54.385 Test: blockdev comparev and writev ...[2024-11-20 17:42:21.519976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b5602000 len:0x1000
00:07:54.385 [2024-11-20 17:42:21.520143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:54.385 passed
00:07:54.385 Test: blockdev nvme passthru rw ...passed
00:07:54.385 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:42:21.521252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:54.385 [2024-11-20 17:42:21.521346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:54.385 passed
00:07:54.385 Test: blockdev nvme admin passthru ...passed
00:07:54.385 Test: blockdev copy ...passed
00:07:54.385 Suite: bdevio tests on: Nvme2n2
00:07:54.385 Test: blockdev write read block ...passed
00:07:54.385 Test: blockdev write zeroes read block ...passed
00:07:54.645 Test: blockdev write zeroes read no split ...passed
00:07:54.645 Test: blockdev write zeroes read split ...passed
00:07:54.645 Test: blockdev write zeroes read split partial ...passed
00:07:54.645 Test: blockdev reset ...[2024-11-20 17:42:21.599337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:07:54.645 [2024-11-20 17:42:21.603517] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:54.645 passed
00:07:54.645 Test: blockdev write read 8 blocks ...passed
00:07:54.645 Test: blockdev write read size > 128k ...passed
00:07:54.645 Test: blockdev write read invalid size ...passed
00:07:54.645 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:07:54.645 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:07:54.645 Test: blockdev write read max offset ...passed
00:07:54.645 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:07:54.645 Test: blockdev writev readv 8 blocks ...passed
00:07:54.645 Test: blockdev writev readv 30 x 1block ...passed
00:07:54.645 Test: blockdev writev readv block ...passed
00:07:54.645 Test: blockdev writev readv size > 128k ...passed
00:07:54.645 Test: blockdev writev readv size > 128k in two iovs ...passed
00:07:54.645 Test: blockdev comparev and writev ...[2024-11-20 17:42:21.612079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9438000 len:0x1000
00:07:54.645 [2024-11-20 17:42:21.612125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1
00:07:54.645 passed
00:07:54.645 Test: blockdev nvme passthru rw ...passed
00:07:54.645 Test: blockdev nvme passthru vendor specific ...[2024-11-20 17:42:21.612916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0
00:07:54.645 [2024-11-20 17:42:21.612952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1
00:07:54.645 passed
00:07:54.645 Test: blockdev nvme admin passthru ...passed
00:07:54.645 Test: blockdev copy ...passed
00:07:54.645 Suite: bdevio tests on: Nvme2n1
00:07:54.645 Test: blockdev write read block ...passed
00:07:54.645 Test: blockdev write zeroes read block ...passed
00:07:54.645 Test: blockdev write zeroes read no split ...passed
00:07:54.645 Test: blockdev write zeroes read split ...passed
00:07:54.645 Test: blockdev write zeroes read split partial ...passed
00:07:54.645 Test: blockdev reset ...[2024-11-20 17:42:21.694476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller
00:07:54.645 [2024-11-20 17:42:21.698505] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
00:07:54.645 passed
00:07:54.645 Test: blockdev write read 8 blocks ...passed
00:07:54.645 passed 00:07:54.645 Test: blockdev write read size > 128k ...passed 00:07:54.645 Test: blockdev write read invalid size ...passed 00:07:54.645 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:54.645 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:54.645 Test: blockdev write read max offset ...passed 00:07:54.645 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:54.645 Test: blockdev writev readv 8 blocks ...passed 00:07:54.645 Test: blockdev writev readv 30 x 1block ...passed 00:07:54.645 Test: blockdev writev readv block ...passed 00:07:54.645 Test: blockdev writev readv size > 128k ...passed 00:07:54.645 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:54.645 Test: blockdev comparev and writev ...[2024-11-20 17:42:21.707100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:07:54.645 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2c9434000 len:0x1000 00:07:54.645 [2024-11-20 17:42:21.707269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:54.645 passed 00:07:54.645 Test: blockdev nvme passthru vendor specific ...passed 00:07:54.645 Test: blockdev nvme admin passthru ...[2024-11-20 17:42:21.708027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:54.645 [2024-11-20 17:42:21.708067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:54.645 passed 00:07:54.645 Test: blockdev copy ...passed 00:07:54.645 Suite: bdevio tests on: Nvme1n1p2 00:07:54.645 Test: blockdev write read block ...passed 00:07:54.645 Test: blockdev write zeroes read block ...passed 00:07:54.645 Test: blockdev write zeroes read no split ...passed 00:07:54.645 Test: blockdev write zeroes read split ...passed 00:07:54.645 Test: blockdev write zeroes read split partial ...passed 00:07:54.645 Test: blockdev reset ...[2024-11-20 17:42:21.788869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:54.645 passed 00:07:54.645 Test: blockdev write read 8 blocks ...[2024-11-20 17:42:21.792837] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:54.645 passed 00:07:54.645 Test: blockdev write read size > 128k ...passed 00:07:54.645 Test: blockdev write read invalid size ...passed 00:07:54.645 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:54.645 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:54.645 Test: blockdev write read max offset ...passed 00:07:54.645 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:54.645 Test: blockdev writev readv 8 blocks ...passed 00:07:54.645 Test: blockdev writev readv 30 x 1block ...passed 00:07:54.645 Test: blockdev writev readv block ...passed 00:07:54.645 Test: blockdev writev readv size > 128k ...passed 00:07:54.645 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:54.645 Test: blockdev comparev and writev ...[2024-11-20 17:42:21.802288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c9430000 len:0x1000 00:07:54.645 [2024-11-20 17:42:21.802347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:54.645 passed 00:07:54.645 Test: blockdev nvme passthru rw ...passed 00:07:54.645 Test: blockdev nvme passthru vendor specific ...passed 00:07:54.645 Test: blockdev nvme admin passthru ...passed 00:07:54.645 Test: blockdev copy ...passed 00:07:54.645 Suite: bdevio tests on: Nvme1n1p1 00:07:54.645 Test: blockdev write read block ...passed 00:07:54.645 Test: blockdev write zeroes read block ...passed 00:07:54.645 Test: blockdev write zeroes read no split ...passed 00:07:54.904 Test: blockdev write zeroes read split ...passed 00:07:54.904 Test: blockdev write zeroes read split partial ...passed 00:07:54.904 Test: blockdev reset ...[2024-11-20 17:42:21.873527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:54.904 [2024-11-20 17:42:21.877349] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller spassed 00:07:54.904 Test: blockdev write read 8 blocks ...uccessful. 
00:07:54.904 Test: blockdev write read size > 128k ...passed 00:07:54.904 Test: blockdev write read invalid size ...passed 00:07:54.904 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:54.904 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:54.904 Test: blockdev write read max offset ...passed 00:07:54.904 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:54.904 Test: blockdev writev readv 8 blocks ...passed 00:07:54.904 Test: blockdev writev readv 30 x 1block ...passed 00:07:54.904 Test: blockdev writev readv block ...passed 00:07:54.904 Test: blockdev writev readv size > 128k ...passed 00:07:54.904 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:54.904 Test: blockdev comparev and writev ...[2024-11-20 17:42:21.887303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b580e000 len:0x1000 00:07:54.904 [2024-11-20 17:42:21.887473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:54.904 passed 00:07:54.904 Test: blockdev nvme passthru rw ...passed 00:07:54.904 Test: blockdev nvme passthru vendor specific ...passed 00:07:54.904 Test: blockdev nvme admin passthru ...passed 00:07:54.904 Test: blockdev copy ...passed 00:07:54.904 Suite: bdevio tests on: Nvme0n1 00:07:54.904 Test: blockdev write read block ...passed 00:07:54.904 Test: blockdev write zeroes read block ...passed 00:07:54.904 Test: blockdev write zeroes read no split ...passed 00:07:54.904 Test: blockdev write zeroes read split ...passed 00:07:54.904 Test: blockdev write zeroes read split partial ...passed 00:07:54.904 Test: blockdev reset ...[2024-11-20 17:42:21.960196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:54.904 [2024-11-20 17:42:21.964093] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:54.904 passed 00:07:54.904 Test: blockdev write read 8 blocks ...passed 00:07:54.904 Test: blockdev write read size > 128k ...passed 00:07:54.904 Test: blockdev write read invalid size ...passed 00:07:54.904 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:54.904 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:54.904 Test: blockdev write read max offset ...passed 00:07:54.904 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:54.904 Test: blockdev writev readv 8 blocks ...passed 00:07:54.904 Test: blockdev writev readv 30 x 1block ...passed 00:07:54.904 Test: blockdev writev readv block ...passed 00:07:54.904 Test: blockdev writev readv size > 128k ...passed 00:07:54.904 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:54.904 Test: blockdev comparev and writev ...[2024-11-20 17:42:21.972363] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has separate metadata which is not supported yet. 00:07:54.904 passed 00:07:54.904 Test: blockdev nvme passthru rw ...
00:07:54.904 passed 00:07:54.904 Test: blockdev nvme passthru vendor specific ...passed 00:07:54.904 Test: blockdev nvme admin passthru ...[2024-11-20 17:42:21.972970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:54.904 [2024-11-20 17:42:21.973019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:54.904 passed 00:07:54.904 Test: blockdev copy ...passed 00:07:54.904 00:07:54.904 Run Summary: Type Total Ran Passed Failed Inactive 00:07:54.904 suites 7 7 n/a 0 0 00:07:54.904 tests 161 161 161 0 0 00:07:54.904 asserts 1025 1025 1025 0 n/a 00:07:54.904 00:07:54.904 Elapsed time = 1.730 seconds 00:07:54.904 0 00:07:54.904 17:42:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62600 00:07:54.904 17:42:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 62600 ']' 00:07:54.904 17:42:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 62600 00:07:54.905 17:42:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:07:54.905 17:42:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:54.905 17:42:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62600 00:07:54.905 killing process with pid 62600 00:07:54.905 17:42:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:54.905 17:42:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:54.905 17:42:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62600' 00:07:54.905 17:42:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 62600 00:07:54.905 17:42:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 62600 00:07:56.280 ************************************ 00:07:56.280 END TEST bdev_bounds 00:07:56.280 ************************************ 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:56.280 00:07:56.280 real 0m2.935s 00:07:56.280 user 0m7.509s 00:07:56.280 sys 0m0.423s 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:56.280 17:42:23 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:56.280 17:42:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:56.280 17:42:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.280 17:42:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.280 ************************************ 00:07:56.280 START TEST bdev_nbd 00:07:56.280 ************************************ 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ 
Linux == Linux ]] 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62665 00:07:56.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62665 /var/tmp/spdk-nbd.sock 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 62665 ']' 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:56.280 17:42:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:56.280 [2024-11-20 17:42:23.275255] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:07:56.280 [2024-11-20 17:42:23.275377] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.539 [2024-11-20 17:42:23.459232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.539 [2024-11-20 17:42:23.573421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.103 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:57.103 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:07:57.103 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:57.103 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.103 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:57.103 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:57.103 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:57.103 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:57.103 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:57.103 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:57.103 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:57.103 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:57.103 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:57.103 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:57.103 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.669 1+0 records in 00:07:57.669 1+0 records out 00:07:57.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000679103 s, 6.0 MB/s 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:57.669 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:57.928 1+0 records in 00:07:57.928 1+0 records out 00:07:57.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000756304 s, 5.4 MB/s 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:57.928 17:42:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:58.186 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:58.186 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:58.186 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:58.186 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:07:58.186 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.186 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.187 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.187 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:07:58.187 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.187 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.187 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.187 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.187 1+0 records in 00:07:58.187 1+0 records out 00:07:58.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573849 s, 7.1 MB/s 00:07:58.187 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.187 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.187 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.187 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.187 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:58.187 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.187 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:58.187 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:58.448 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:58.448 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:58.448 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:58.448 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:07:58.448 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.448 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.448 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.448 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:07:58.448 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.448 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.448 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.448 17:42:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.448 1+0 records in 00:07:58.448 1+0 records out 00:07:58.448 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000695274 s, 5.9 MB/s 00:07:58.449 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.449 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.449 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.449 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.449 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:58.449 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.449 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:58.449 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:58.707 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:58.707 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:58.707 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:58.707 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:07:58.707 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.707 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.707 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.708 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:07:58.708 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.708 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.708 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.708 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.708 1+0 records in 00:07:58.708 1+0 records out 00:07:58.708 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000798838 s, 5.1 MB/s 00:07:58.708 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.708 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.708 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.708 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.708 17:42:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:58.708 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.708 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:58.708 17:42:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:58.966 1+0 records in 00:07:58.966 1+0 records out 00:07:58.966 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00141236 s, 2.9 MB/s 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:58.966 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:59.224 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:59.224 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:59.224 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:59.224 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:07:59.224 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:07:59.224 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:59.224 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:59.225 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:07:59.225 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:07:59.225 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:59.225 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:59.225 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 
-- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:59.225 1+0 records in 00:07:59.225 1+0 records out 00:07:59.225 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000814203 s, 5.0 MB/s 00:07:59.225 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.225 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:07:59.225 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:59.225 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:59.225 17:42:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:07:59.225 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:59.225 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:59.225 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:59.483 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:59.483 { 00:07:59.483 "nbd_device": "/dev/nbd0", 00:07:59.483 "bdev_name": "Nvme0n1" 00:07:59.483 }, 00:07:59.483 { 00:07:59.483 "nbd_device": "/dev/nbd1", 00:07:59.483 "bdev_name": "Nvme1n1p1" 00:07:59.483 }, 00:07:59.483 { 00:07:59.483 "nbd_device": "/dev/nbd2", 00:07:59.483 "bdev_name": "Nvme1n1p2" 00:07:59.483 }, 00:07:59.483 { 00:07:59.483 "nbd_device": "/dev/nbd3", 00:07:59.483 "bdev_name": "Nvme2n1" 00:07:59.483 }, 00:07:59.483 { 00:07:59.483 "nbd_device": "/dev/nbd4", 00:07:59.483 "bdev_name": "Nvme2n2" 00:07:59.483 }, 00:07:59.483 { 00:07:59.483 "nbd_device": "/dev/nbd5", 00:07:59.483 "bdev_name": "Nvme2n3" 00:07:59.483 }, 00:07:59.483 { 00:07:59.483 "nbd_device": "/dev/nbd6", 00:07:59.483 "bdev_name": "Nvme3n1" 00:07:59.483 } 00:07:59.483 ]' 00:07:59.483 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:59.483 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:59.483 { 00:07:59.483 "nbd_device": "/dev/nbd0", 00:07:59.483 "bdev_name": "Nvme0n1" 00:07:59.483 }, 00:07:59.483 { 00:07:59.483 "nbd_device": "/dev/nbd1", 00:07:59.483 "bdev_name": "Nvme1n1p1" 00:07:59.483 }, 00:07:59.483 { 00:07:59.483 "nbd_device": "/dev/nbd2", 00:07:59.483 "bdev_name": "Nvme1n1p2" 00:07:59.483 }, 00:07:59.483 { 00:07:59.483 "nbd_device": "/dev/nbd3", 00:07:59.483 "bdev_name": "Nvme2n1" 00:07:59.483 }, 00:07:59.483 { 00:07:59.483 "nbd_device": "/dev/nbd4", 00:07:59.483 "bdev_name": "Nvme2n2" 00:07:59.483 }, 00:07:59.483 { 00:07:59.483 "nbd_device": "/dev/nbd5", 00:07:59.483 "bdev_name": "Nvme2n3" 00:07:59.483 }, 00:07:59.483 { 00:07:59.483 "nbd_device": "/dev/nbd6", 00:07:59.483 "bdev_name": "Nvme3n1" 00:07:59.483 } 00:07:59.483 ]' 00:07:59.483 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:59.483 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:59.483 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.483 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:59.483 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:59.483 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:59.483 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.483 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:59.742 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:59.742 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:59.742 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:59.742 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.742 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.742 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:59.742 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:59.742 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.742 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.742 17:42:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:00.001 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:00.001 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:00.001 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:00.001 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.001 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.001 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:00.001 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.001 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.001 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.001 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:00.260 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:00.260 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:00.260 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:00.260 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.260 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.260 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:00.260 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.260 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.260 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.260 17:42:27 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:00.629 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:00.629 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:00.629 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:00.629 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.629 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.629 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:00.629 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.629 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.629 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.629 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:00.919 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:00.919 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:00.919 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:00.919 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.919 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.919 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:00.919 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.919 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.919 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.919 17:42:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:00.920 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:00.920 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:00.920 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:00.920 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:00.920 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:00.920 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:00.920 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:00.920 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:00.920 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.920 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:01.178 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:01.178 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:01.178 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
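The surrounding trace is stepping through nbd_common.sh's waitfornbd_exit helper for each stopped device: after nbd_stop_disk is issued over the RPC socket, the loop that continues right after this note polls /proc/partitions until the kernel drops the nbd entry, for at most 20 probes. A condensed sketch of that pattern; the 0.1 s sleep between probes is an assumption, since the trace only shows the loop bounds, the grep, and the break:

# Condensed sketch of the waitfornbd_exit pattern traced here.
waitfornbd_exit_sketch() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            sleep 0.1  # assumed interval: entry still present, keep polling
        else
            break      # partition entry gone: nbd_stop_disk has taken effect
        fi
    done
    return 0
}
# e.g. waitfornbd_exit_sketch nbd6 after 'rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6'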
00:08:01.178 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.178 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.178 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:01.178 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:01.178 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.178 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:01.178 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.178 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:01.436 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:01.437 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:01.437 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:01.437 17:42:28 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:01.437 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:01.437 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:01.437 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:01.437 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:01.437 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:01.694 /dev/nbd0 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.694 1+0 records in 00:08:01.694 1+0 records out 00:08:01.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000550771 s, 7.4 MB/s 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:01.694 17:42:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:01.953 /dev/nbd1 00:08:01.953 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:01.953 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:01.953 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:08:01.953 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:01.953 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:01.953 17:42:29 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:01.953 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:08:01.953 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:01.953 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:01.953 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:01.953 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:01.953 1+0 records in 00:08:01.953 1+0 records out 00:08:01.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000671879 s, 6.1 MB/s 00:08:01.953 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:02.210 /dev/nbd10 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.210 1+0 records in 00:08:02.210 1+0 records out 00:08:02.210 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000771645 s, 5.3 MB/s 00:08:02.210 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:02.469 /dev/nbd11 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.469 1+0 records in 00:08:02.469 1+0 records out 00:08:02.469 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678905 s, 6.0 MB/s 00:08:02.469 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:02.727 /dev/nbd12 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:02.727 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:02.985 1+0 records in 00:08:02.985 1+0 records out 00:08:02.985 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000756504 s, 5.4 MB/s 00:08:02.985 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.985 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:02.985 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:02.985 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:02.985 17:42:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:02.985 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:02.985 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:02.985 17:42:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:02.985 /dev/nbd13 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.244 1+0 records in 00:08:03.244 1+0 records out 00:08:03.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000903914 s, 4.5 MB/s 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:03.244 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:03.503 /dev/nbd14 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:03.503 1+0 records in 00:08:03.503 1+0 records out 00:08:03.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000894245 s, 4.6 MB/s 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:03.503 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:03.762 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:03.762 { 00:08:03.762 "nbd_device": "/dev/nbd0", 00:08:03.762 "bdev_name": "Nvme0n1" 00:08:03.762 }, 00:08:03.762 { 00:08:03.762 "nbd_device": "/dev/nbd1", 00:08:03.762 "bdev_name": "Nvme1n1p1" 00:08:03.762 }, 00:08:03.762 { 00:08:03.762 "nbd_device": "/dev/nbd10", 00:08:03.762 "bdev_name": "Nvme1n1p2" 00:08:03.762 }, 00:08:03.762 { 00:08:03.762 "nbd_device": "/dev/nbd11", 00:08:03.762 "bdev_name": "Nvme2n1" 00:08:03.762 }, 00:08:03.762 { 00:08:03.762 "nbd_device": "/dev/nbd12", 00:08:03.762 "bdev_name": "Nvme2n2" 00:08:03.762 }, 00:08:03.762 { 00:08:03.762 "nbd_device": "/dev/nbd13", 00:08:03.762 "bdev_name": "Nvme2n3" 
00:08:03.762 }, 00:08:03.762 { 00:08:03.763 "nbd_device": "/dev/nbd14", 00:08:03.763 "bdev_name": "Nvme3n1" 00:08:03.763 } 00:08:03.763 ]' 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:03.763 { 00:08:03.763 "nbd_device": "/dev/nbd0", 00:08:03.763 "bdev_name": "Nvme0n1" 00:08:03.763 }, 00:08:03.763 { 00:08:03.763 "nbd_device": "/dev/nbd1", 00:08:03.763 "bdev_name": "Nvme1n1p1" 00:08:03.763 }, 00:08:03.763 { 00:08:03.763 "nbd_device": "/dev/nbd10", 00:08:03.763 "bdev_name": "Nvme1n1p2" 00:08:03.763 }, 00:08:03.763 { 00:08:03.763 "nbd_device": "/dev/nbd11", 00:08:03.763 "bdev_name": "Nvme2n1" 00:08:03.763 }, 00:08:03.763 { 00:08:03.763 "nbd_device": "/dev/nbd12", 00:08:03.763 "bdev_name": "Nvme2n2" 00:08:03.763 }, 00:08:03.763 { 00:08:03.763 "nbd_device": "/dev/nbd13", 00:08:03.763 "bdev_name": "Nvme2n3" 00:08:03.763 }, 00:08:03.763 { 00:08:03.763 "nbd_device": "/dev/nbd14", 00:08:03.763 "bdev_name": "Nvme3n1" 00:08:03.763 } 00:08:03.763 ]' 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:03.763 /dev/nbd1 00:08:03.763 /dev/nbd10 00:08:03.763 /dev/nbd11 00:08:03.763 /dev/nbd12 00:08:03.763 /dev/nbd13 00:08:03.763 /dev/nbd14' 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:03.763 /dev/nbd1 00:08:03.763 /dev/nbd10 00:08:03.763 /dev/nbd11 00:08:03.763 /dev/nbd12 00:08:03.763 /dev/nbd13 00:08:03.763 /dev/nbd14' 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:03.763 256+0 records in 00:08:03.763 256+0 records out 00:08:03.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138421 s, 75.8 MB/s 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:03.763 256+0 records in 00:08:03.763 256+0 records out 00:08:03.763 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.132763 s, 7.9 MB/s 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:03.763 17:42:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:04.023 256+0 records in 00:08:04.023 256+0 records out 00:08:04.023 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.139116 s, 7.5 MB/s 00:08:04.023 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.023 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:04.283 256+0 records in 00:08:04.283 256+0 records out 00:08:04.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137159 s, 7.6 MB/s 00:08:04.283 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.283 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:04.283 256+0 records in 00:08:04.283 256+0 records out 00:08:04.283 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.141625 s, 7.4 MB/s 00:08:04.283 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.283 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:04.542 256+0 records in 00:08:04.542 256+0 records out 00:08:04.542 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133594 s, 7.8 MB/s 00:08:04.542 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.542 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:04.542 256+0 records in 00:08:04.542 256+0 records out 00:08:04.542 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.137177 s, 7.6 MB/s 00:08:04.542 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:04.542 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:04.801 256+0 records in 00:08:04.801 256+0 records out 00:08:04.801 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.136 s, 7.7 MB/s 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:04.801 17:42:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:05.061 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:05.061 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:05.061 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:05.061 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.061 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.061 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:05.061 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.061 17:42:32 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:08:05.061 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.061 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:05.320 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:05.320 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:05.320 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:05.320 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.320 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.320 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:05.320 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.320 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.320 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.320 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:05.579 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:05.579 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:05.579 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:05.579 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.580 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.580 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:05.580 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.580 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.580 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.580 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:05.839 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:05.839 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:05.839 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:05.839 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:05.839 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:05.839 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:05.839 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:05.839 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:05.839 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:05.839 17:42:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:06.099 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:08:06.099 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:06.099 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:06.099 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.099 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.099 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:06.099 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.099 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.099 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.099 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:06.099 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:06.099 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:06.099 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:06.099 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.099 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.359 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:06.359 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.359 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.359 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:06.359 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:06.359 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:06.359 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:06.359 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:06.359 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:06.359 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:06.359 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:06.359 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:06.359 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:06.359 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:06.359 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.359 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:06.618 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:06.618 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:06.618 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:06.618 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:08:06.618 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:06.618 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:06.618 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:06.618 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:06.618 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:06.618 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:06.618 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:06.618 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:06.619 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:06.619 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:06.619 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:06.619 17:42:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:06.878 malloc_lvol_verify 00:08:06.878 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:07.137 e0730dfe-faaf-407e-b850-0b2818735c3d 00:08:07.137 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:07.395 9219f040-14f4-446e-9b79-6e57c63d75b0 00:08:07.395 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:07.724 /dev/nbd0 00:08:07.724 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:07.724 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:07.724 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:07.724 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:07.724 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:07.724 mke2fs 1.47.0 (5-Feb-2023) 00:08:07.724 Discarding device blocks: 0/4096 done 00:08:07.724 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:07.724 00:08:07.724 Allocating group tables: 0/1 done 00:08:07.724 Writing inode tables: 0/1 done 00:08:07.724 Creating journal (1024 blocks): done 00:08:07.724 Writing superblocks and filesystem accounting information: 0/1 done 00:08:07.724 00:08:07.724 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:07.724 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:07.724 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:07.724 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:07.724 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:07.724 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:08:07.724 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62665 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 62665 ']' 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 62665 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62665 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:08.002 killing process with pid 62665 00:08:08.002 17:42:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62665' 00:08:08.003 17:42:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 62665 00:08:08.003 17:42:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 62665 00:08:09.379 17:42:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:09.379 00:08:09.379 real 0m13.011s 00:08:09.379 user 0m16.845s 00:08:09.379 sys 0m5.541s 00:08:09.379 17:42:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.379 ************************************ 00:08:09.379 END TEST bdev_nbd 00:08:09.379 ************************************ 00:08:09.379 17:42:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:09.379 17:42:36 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:09.379 17:42:36 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:09.379 skipping fio tests on NVMe due to multi-ns failures. 00:08:09.379 17:42:36 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:09.379 17:42:36 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:08:09.379 17:42:36 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:09.379 17:42:36 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:09.379 17:42:36 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:09.379 17:42:36 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:09.379 17:42:36 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:09.379 ************************************ 00:08:09.379 START TEST bdev_verify 00:08:09.379 ************************************ 00:08:09.379 17:42:36 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:09.379 [2024-11-20 17:42:36.343093] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:08:09.379 [2024-11-20 17:42:36.343210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63090 ] 00:08:09.379 [2024-11-20 17:42:36.523407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:09.638 [2024-11-20 17:42:36.641705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.638 [2024-11-20 17:42:36.641735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.207 Running I/O for 5 seconds... 
00:08:12.522 21760.00 IOPS, 85.00 MiB/s [2024-11-20T17:42:40.635Z] 22272.00 IOPS, 87.00 MiB/s [2024-11-20T17:42:41.569Z] 22400.00 IOPS, 87.50 MiB/s [2024-11-20T17:42:42.504Z] 22176.00 IOPS, 86.62 MiB/s [2024-11-20T17:42:42.504Z] 22374.40 IOPS, 87.40 MiB/s 00:08:15.328 Latency(us) 00:08:15.328 [2024-11-20T17:42:42.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:15.328 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.328 Verification LBA range: start 0x0 length 0xbd0bd 00:08:15.328 Nvme0n1 : 5.06 1593.30 6.22 0.00 0.00 80057.20 19687.12 95171.96 00:08:15.328 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.328 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:15.328 Nvme0n1 : 5.06 1543.70 6.03 0.00 0.00 82625.73 18634.33 95593.07 00:08:15.328 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.328 Verification LBA range: start 0x0 length 0x4ff80 00:08:15.328 Nvme1n1p1 : 5.06 1592.53 6.22 0.00 0.00 79959.33 21582.14 88855.24 00:08:15.328 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.328 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:15.328 Nvme1n1p1 : 5.06 1543.25 6.03 0.00 0.00 82522.72 21371.58 90118.58 00:08:15.328 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.328 Verification LBA range: start 0x0 length 0x4ff7f 00:08:15.328 Nvme1n1p2 : 5.06 1592.10 6.22 0.00 0.00 79705.45 20318.79 77485.13 00:08:15.328 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.328 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:15.328 Nvme1n1p2 : 5.06 1542.77 6.03 0.00 0.00 82268.87 23477.15 78748.48 00:08:15.328 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.328 Verification LBA range: start 0x0 length 0x80000 00:08:15.328 Nvme2n1 : 5.09 1596.86 6.24 0.00 0.00 79299.37 13159.84 66957.26 00:08:15.328 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.328 Verification LBA range: start 0x80000 length 0x80000 00:08:15.328 Nvme2n1 : 5.09 1547.14 6.04 0.00 0.00 81842.83 11212.18 68220.61 00:08:15.328 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.328 Verification LBA range: start 0x0 length 0x80000 00:08:15.328 Nvme2n2 : 5.09 1596.03 6.23 0.00 0.00 79185.19 14739.02 66115.03 00:08:15.328 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.328 Verification LBA range: start 0x80000 length 0x80000 00:08:15.328 Nvme2n2 : 5.10 1555.58 6.08 0.00 0.00 81424.20 10212.04 67799.49 00:08:15.328 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.328 Verification LBA range: start 0x0 length 0x80000 00:08:15.328 Nvme2n3 : 5.11 1604.67 6.27 0.00 0.00 78795.10 8790.77 66115.03 00:08:15.328 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.328 Verification LBA range: start 0x80000 length 0x80000 00:08:15.328 Nvme2n3 : 5.10 1555.18 6.07 0.00 0.00 81298.95 10422.59 69483.95 00:08:15.328 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:15.328 Verification LBA range: start 0x0 length 0x20000 00:08:15.328 Nvme3n1 : 5.11 1604.27 6.27 0.00 0.00 78676.40 9053.97 65272.80 00:08:15.328 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:15.328 Verification LBA range: start 0x20000 length 0x20000 00:08:15.328 Nvme3n1 
: 5.10 1554.76 6.07 0.00 0.00 81171.56 10317.31 71168.41 00:08:15.328 [2024-11-20T17:42:42.504Z] =================================================================================================================== 00:08:15.328 [2024-11-20T17:42:42.504Z] Total : 22022.15 86.02 0.00 0.00 80608.00 8790.77 95593.07 00:08:17.228 00:08:17.228 real 0m7.688s 00:08:17.228 user 0m14.187s 00:08:17.228 sys 0m0.320s 00:08:17.228 ************************************ 00:08:17.228 END TEST bdev_verify 00:08:17.228 ************************************ 00:08:17.228 17:42:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.228 17:42:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:17.228 17:42:44 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:17.228 17:42:44 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:08:17.228 17:42:44 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:17.228 17:42:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:17.228 ************************************ 00:08:17.228 START TEST bdev_verify_big_io 00:08:17.228 ************************************ 00:08:17.228 17:42:44 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:17.228 [2024-11-20 17:42:44.125832] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:08:17.228 [2024-11-20 17:42:44.126477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63194 ] 00:08:17.228 [2024-11-20 17:42:44.324837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:17.487 [2024-11-20 17:42:44.447672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.487 [2024-11-20 17:42:44.447707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.422 Running I/O for 5 seconds... 
00:08:24.247 1058.00 IOPS, 66.12 MiB/s [2024-11-20T17:42:51.423Z] 3048.00 IOPS, 190.50 MiB/s [2024-11-20T17:42:51.423Z] 3601.00 IOPS, 225.06 MiB/s 00:08:24.247 Latency(us) 00:08:24.247 [2024-11-20T17:42:51.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:24.247 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:24.247 Verification LBA range: start 0x0 length 0xbd0b 00:08:24.247 Nvme0n1 : 5.76 124.17 7.76 0.00 0.00 995908.67 26530.24 1003937.82 00:08:24.247 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:24.247 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:24.247 Nvme0n1 : 5.71 121.83 7.61 0.00 0.00 1008330.64 24845.78 1414945.93 00:08:24.247 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:24.247 Verification LBA range: start 0x0 length 0x4ff8 00:08:24.247 Nvme1n1p1 : 5.76 119.42 7.46 0.00 0.00 1006930.31 60219.42 1354305.39 00:08:24.247 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:24.247 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:24.248 Nvme1n1p1 : 5.71 125.29 7.83 0.00 0.00 964606.23 42322.04 1435159.44 00:08:24.248 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:24.248 Verification LBA range: start 0x0 length 0x4ff7 00:08:24.248 Nvme1n1p2 : 5.76 124.94 7.81 0.00 0.00 939519.53 88434.12 1118481.07 00:08:24.248 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:24.248 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:24.248 Nvme1n1p2 : 5.72 125.59 7.85 0.00 0.00 936583.49 66957.26 1455372.95 00:08:24.248 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:24.248 Verification LBA range: start 0x0 length 0x8000 00:08:24.248 Nvme2n1 : 5.80 133.06 8.32 0.00 0.00 868038.82 59798.31 936559.45 00:08:24.248 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:24.248 Verification LBA range: start 0x8000 length 0x8000 00:08:24.248 Nvme2n1 : 5.76 129.77 8.11 0.00 0.00 884332.34 44217.06 1468848.63 00:08:24.248 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:24.248 Verification LBA range: start 0x0 length 0x8000 00:08:24.248 Nvme2n2 : 5.80 137.67 8.60 0.00 0.00 822532.06 36847.55 956772.96 00:08:24.248 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:24.248 Verification LBA range: start 0x8000 length 0x8000 00:08:24.248 Nvme2n2 : 5.88 134.68 8.42 0.00 0.00 826525.38 61482.77 1482324.31 00:08:24.248 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:24.248 Verification LBA range: start 0x0 length 0x8000 00:08:24.248 Nvme2n3 : 5.84 142.53 8.91 0.00 0.00 774251.68 30530.83 896132.42 00:08:24.248 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:24.248 Verification LBA range: start 0x8000 length 0x8000 00:08:24.248 Nvme2n3 : 5.94 148.54 9.28 0.00 0.00 733171.00 38321.45 1152170.26 00:08:24.248 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:24.248 Verification LBA range: start 0x0 length 0x2000 00:08:24.248 Nvme3n1 : 5.94 161.51 10.09 0.00 0.00 668853.96 947.51 963510.80 00:08:24.248 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:24.248 Verification LBA range: start 0x2000 length 0x2000 00:08:24.248 Nvme3n1 : 5.95 161.25 10.08 0.00 0.00 663240.52 1184.39 1394732.41 00:08:24.248 
[2024-11-20T17:42:51.424Z] =================================================================================================================== 00:08:24.248 [2024-11-20T17:42:51.424Z] Total : 1890.23 118.14 0.00 0.00 851032.24 947.51 1482324.31 00:08:26.219 ************************************ 00:08:26.219 END TEST bdev_verify_big_io 00:08:26.219 ************************************ 00:08:26.219 00:08:26.219 real 0m9.210s 00:08:26.219 user 0m17.164s 00:08:26.219 sys 0m0.357s 00:08:26.219 17:42:53 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.219 17:42:53 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:26.219 17:42:53 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.219 17:42:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:26.219 17:42:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.219 17:42:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:26.219 ************************************ 00:08:26.219 START TEST bdev_write_zeroes 00:08:26.219 ************************************ 00:08:26.219 17:42:53 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:26.478 [2024-11-20 17:42:53.410125] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:08:26.478 [2024-11-20 17:42:53.410601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63314 ] 00:08:26.478 [2024-11-20 17:42:53.605081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.736 [2024-11-20 17:42:53.719794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.304 Running I/O for 1 seconds... 
00:08:28.680 67648.00 IOPS, 264.25 MiB/s 00:08:28.680 Latency(us) 00:08:28.680 [2024-11-20T17:42:55.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.680 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:28.680 Nvme0n1 : 1.02 9628.46 37.61 0.00 0.00 13267.78 10896.35 29688.60 00:08:28.680 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:28.680 Nvme1n1p1 : 1.02 9618.77 37.57 0.00 0.00 13262.63 11054.27 29899.16 00:08:28.680 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:28.680 Nvme1n1p2 : 1.03 9609.28 37.54 0.00 0.00 13211.47 10633.15 26635.51 00:08:28.680 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:28.680 Nvme2n1 : 1.03 9600.06 37.50 0.00 0.00 13160.74 10791.07 23792.99 00:08:28.680 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:28.680 Nvme2n2 : 1.03 9590.96 37.46 0.00 0.00 13140.01 10948.99 21687.42 00:08:28.680 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:28.680 Nvme2n3 : 1.03 9582.36 37.43 0.00 0.00 13116.66 10422.59 21792.69 00:08:28.680 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:28.680 Nvme3n1 : 1.03 9573.69 37.40 0.00 0.00 13098.09 9422.44 23266.60 00:08:28.680 [2024-11-20T17:42:55.856Z] =================================================================================================================== 00:08:28.680 [2024-11-20T17:42:55.856Z] Total : 67203.59 262.51 0.00 0.00 13179.63 9422.44 29899.16 00:08:29.619 00:08:29.619 real 0m3.359s 00:08:29.619 user 0m2.950s 00:08:29.619 sys 0m0.290s 00:08:29.619 17:42:56 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.619 17:42:56 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:29.619 ************************************ 00:08:29.619 END TEST bdev_write_zeroes 00:08:29.619 ************************************ 00:08:29.620 17:42:56 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:29.620 17:42:56 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:29.620 17:42:56 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.620 17:42:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:29.620 ************************************ 00:08:29.620 START TEST bdev_json_nonenclosed 00:08:29.620 ************************************ 00:08:29.620 17:42:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:29.879 [2024-11-20 17:42:56.815377] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
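Note: the bdev_verify, bdev_verify_big_io, and bdev_write_zeroes stages above all drive the same bdevperf binary against the same bdev.json; only the I/O size, workload, duration, and core mask change. A sketch of the shared command line, with the flag values taken from the write_zeroes run just traced (the verify run passes -o 4096 -w verify -t 5 -C -m 0x3, the big-io run -o 65536 -w verify -t 5 -C -m 0x3):

    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    # -q queue depth per job, -o I/O size in bytes, -w workload type, -t run seconds
    "$bdevperf" --json "$conf" -q 128 -o 4096 -w write_zeroes -t 1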
00:08:29.879 [2024-11-20 17:42:56.815493] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63367 ] 00:08:29.879 [2024-11-20 17:42:56.994821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.139 [2024-11-20 17:42:57.109764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.139 [2024-11-20 17:42:57.109869] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:30.139 [2024-11-20 17:42:57.109891] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:30.139 [2024-11-20 17:42:57.109904] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:30.400 00:08:30.400 real 0m0.632s 00:08:30.400 user 0m0.401s 00:08:30.400 sys 0m0.126s 00:08:30.400 17:42:57 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.400 17:42:57 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:30.400 ************************************ 00:08:30.400 END TEST bdev_json_nonenclosed 00:08:30.400 ************************************ 00:08:30.400 17:42:57 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:30.400 17:42:57 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:08:30.400 17:42:57 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:30.400 17:42:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:30.400 ************************************ 00:08:30.400 START TEST bdev_json_nonarray 00:08:30.400 ************************************ 00:08:30.400 17:42:57 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:30.400 [2024-11-20 17:42:57.514117] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:08:30.400 [2024-11-20 17:42:57.514396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63393 ] 00:08:30.657 [2024-11-20 17:42:57.694166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.657 [2024-11-20 17:42:57.814646] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.657 [2024-11-20 17:42:57.814752] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
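Note: both failures traced here are deliberate negative tests: bdev_json_nonenclosed feeds bdevperf a config whose top level is not a JSON object, and bdev_json_nonarray one whose "subsystems" key is not an array; each test passes only if json_config_prepare_ctx rejects the file and the app exits non-zero. The actual nonenclosed.json and nonarray.json contents are not shown in this log; below are hypothetical minimal files consistent with the two error messages, plus a sketch of the expected-failure check:

    # hypothetical fixtures; the real test files may differ
    printf '"subsystems": []\n' > /tmp/nonenclosed.json    # not enclosed in {}
    printf '{ "subsystems": {} }\n' > /tmp/nonarray.json   # object, not an array
    bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    # each invocation is expected to fail; the sketch checks only the exit code
    if "$bdevperf" --json /tmp/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1; then
        echo "malformed config was accepted unexpectedly" >&2
        exit 1
    fi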
00:08:30.657 [2024-11-20 17:42:57.814793] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:30.657 [2024-11-20 17:42:57.814807] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:30.916 00:08:30.916 real 0m0.656s 00:08:30.916 user 0m0.408s 00:08:30.916 sys 0m0.143s 00:08:30.916 ************************************ 00:08:30.916 END TEST bdev_json_nonarray 00:08:30.916 ************************************ 00:08:30.916 17:42:58 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:30.916 17:42:58 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:31.174 17:42:58 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:31.174 17:42:58 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:31.174 17:42:58 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:31.174 17:42:58 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.175 17:42:58 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.175 17:42:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:31.175 ************************************ 00:08:31.175 START TEST bdev_gpt_uuid 00:08:31.175 ************************************ 00:08:31.175 17:42:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:08:31.175 17:42:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:31.175 17:42:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:31.175 17:42:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63418 00:08:31.175 17:42:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:31.175 17:42:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:31.175 17:42:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63418 00:08:31.175 17:42:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63418 ']' 00:08:31.175 17:42:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:31.175 17:42:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:31.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:31.175 17:42:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:31.175 17:42:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:31.175 17:42:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:31.175 [2024-11-20 17:42:58.258085] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
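Note: once the spdk_tgt started here is listening, the gpt_uuid test loads bdev.json, waits for examine to finish, then looks each GPT partition bdev up by its unique partition GUID and asserts that the lookup returns exactly one bdev whose alias and driver_specific GUID round-trip; those are the jq pipelines visible further down. A minimal sketch of one such check, assuming spdk_tgt is already serving /var/tmp/spdk.sock with the config loaded (the GUID is the SPDK_TEST_first value shown in this trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$("$rpc" bdev_get_bdevs -b "$uuid")
    # exactly one bdev, aliased by its GPT unique partition GUID
    [[ $(jq -r 'length' <<< "$bdev") == 1 ]] || exit 1
    [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$uuid" ]] || exit 1
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$uuid" ]] || exit 1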
00:08:31.175 [2024-11-20 17:42:58.258234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63418 ] 00:08:31.433 [2024-11-20 17:42:58.440849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.433 [2024-11-20 17:42:58.550968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.369 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:32.369 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:08:32.369 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:32.369 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.369 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:32.629 Some configs were skipped because the RPC state that can call them passed over. 00:08:32.629 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.629 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:32.629 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.629 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:32.629 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.629 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:32.629 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.629 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:32.889 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.889 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:32.889 { 00:08:32.889 "name": "Nvme1n1p1", 00:08:32.889 "aliases": [ 00:08:32.889 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:32.889 ], 00:08:32.889 "product_name": "GPT Disk", 00:08:32.889 "block_size": 4096, 00:08:32.889 "num_blocks": 655104, 00:08:32.889 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:32.889 "assigned_rate_limits": { 00:08:32.889 "rw_ios_per_sec": 0, 00:08:32.889 "rw_mbytes_per_sec": 0, 00:08:32.889 "r_mbytes_per_sec": 0, 00:08:32.889 "w_mbytes_per_sec": 0 00:08:32.889 }, 00:08:32.889 "claimed": false, 00:08:32.889 "zoned": false, 00:08:32.889 "supported_io_types": { 00:08:32.889 "read": true, 00:08:32.889 "write": true, 00:08:32.889 "unmap": true, 00:08:32.889 "flush": true, 00:08:32.889 "reset": true, 00:08:32.889 "nvme_admin": false, 00:08:32.889 "nvme_io": false, 00:08:32.889 "nvme_io_md": false, 00:08:32.889 "write_zeroes": true, 00:08:32.889 "zcopy": false, 00:08:32.889 "get_zone_info": false, 00:08:32.889 "zone_management": false, 00:08:32.889 "zone_append": false, 00:08:32.889 "compare": true, 00:08:32.889 "compare_and_write": false, 00:08:32.889 "abort": true, 00:08:32.889 "seek_hole": false, 00:08:32.889 "seek_data": false, 00:08:32.889 "copy": true, 00:08:32.889 "nvme_iov_md": false 00:08:32.889 }, 00:08:32.889 "driver_specific": { 
00:08:32.889 "gpt": { 00:08:32.889 "base_bdev": "Nvme1n1", 00:08:32.889 "offset_blocks": 256, 00:08:32.889 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:32.889 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:32.889 "partition_name": "SPDK_TEST_first" 00:08:32.889 } 00:08:32.889 } 00:08:32.889 } 00:08:32.889 ]' 00:08:32.889 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:32.889 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:32.889 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:32.889 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:32.889 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:32.889 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:32.889 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:32.889 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:32.889 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:32.889 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:32.889 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:32.889 { 00:08:32.889 "name": "Nvme1n1p2", 00:08:32.889 "aliases": [ 00:08:32.889 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:32.889 ], 00:08:32.889 "product_name": "GPT Disk", 00:08:32.889 "block_size": 4096, 00:08:32.889 "num_blocks": 655103, 00:08:32.889 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:32.889 "assigned_rate_limits": { 00:08:32.889 "rw_ios_per_sec": 0, 00:08:32.889 "rw_mbytes_per_sec": 0, 00:08:32.889 "r_mbytes_per_sec": 0, 00:08:32.889 "w_mbytes_per_sec": 0 00:08:32.889 }, 00:08:32.889 "claimed": false, 00:08:32.889 "zoned": false, 00:08:32.889 "supported_io_types": { 00:08:32.889 "read": true, 00:08:32.889 "write": true, 00:08:32.889 "unmap": true, 00:08:32.889 "flush": true, 00:08:32.889 "reset": true, 00:08:32.889 "nvme_admin": false, 00:08:32.889 "nvme_io": false, 00:08:32.889 "nvme_io_md": false, 00:08:32.889 "write_zeroes": true, 00:08:32.889 "zcopy": false, 00:08:32.889 "get_zone_info": false, 00:08:32.889 "zone_management": false, 00:08:32.889 "zone_append": false, 00:08:32.889 "compare": true, 00:08:32.889 "compare_and_write": false, 00:08:32.890 "abort": true, 00:08:32.890 "seek_hole": false, 00:08:32.890 "seek_data": false, 00:08:32.890 "copy": true, 00:08:32.890 "nvme_iov_md": false 00:08:32.890 }, 00:08:32.890 "driver_specific": { 00:08:32.890 "gpt": { 00:08:32.890 "base_bdev": "Nvme1n1", 00:08:32.890 "offset_blocks": 655360, 00:08:32.890 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:32.890 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:32.890 "partition_name": "SPDK_TEST_second" 00:08:32.890 } 00:08:32.890 } 00:08:32.890 } 00:08:32.890 ]' 00:08:32.890 17:42:59 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:32.890 17:43:00 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:32.890 17:43:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:33.149 17:43:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:33.149 17:43:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:33.149 17:43:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:33.149 17:43:00 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63418 00:08:33.149 17:43:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63418 ']' 00:08:33.149 17:43:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63418 00:08:33.149 17:43:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:08:33.149 17:43:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:33.150 17:43:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63418 00:08:33.150 17:43:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:33.150 17:43:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:33.150 killing process with pid 63418 00:08:33.150 17:43:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63418' 00:08:33.150 17:43:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63418 00:08:33.150 17:43:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63418 00:08:35.686 00:08:35.686 real 0m4.390s 00:08:35.686 user 0m4.557s 00:08:35.686 sys 0m0.563s 00:08:35.686 17:43:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:35.686 ************************************ 00:08:35.686 END TEST bdev_gpt_uuid 00:08:35.686 ************************************ 00:08:35.686 17:43:02 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:35.686 17:43:02 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:35.686 17:43:02 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:35.686 17:43:02 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:35.686 17:43:02 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:35.686 17:43:02 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:35.686 17:43:02 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:35.686 17:43:02 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:35.686 17:43:02 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:35.686 17:43:02 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:36.273 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:36.273 Waiting for block devices as requested 00:08:36.533 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:36.533 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:36.533 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:36.792 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:42.062 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:42.062 17:43:08 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:42.062 17:43:08 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:42.062 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:42.062 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:42.062 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:42.062 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:42.062 17:43:09 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:42.062 00:08:42.062 real 1m6.053s 00:08:42.062 user 1m22.110s 00:08:42.062 sys 0m12.484s 00:08:42.062 ************************************ 00:08:42.062 END TEST blockdev_nvme_gpt 00:08:42.062 ************************************ 00:08:42.062 17:43:09 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.062 17:43:09 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:42.319 17:43:09 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:42.319 17:43:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:42.319 17:43:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.319 17:43:09 -- common/autotest_common.sh@10 -- # set +x 00:08:42.319 ************************************ 00:08:42.319 START TEST nvme 00:08:42.319 ************************************ 00:08:42.319 17:43:09 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:42.319 * Looking for test storage... 00:08:42.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:42.319 17:43:09 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:42.319 17:43:09 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:42.319 17:43:09 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:08:42.319 17:43:09 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:42.319 17:43:09 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.319 17:43:09 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.319 17:43:09 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.319 17:43:09 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.319 17:43:09 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.319 17:43:09 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.319 17:43:09 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.319 17:43:09 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.319 17:43:09 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.319 17:43:09 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.319 17:43:09 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.319 17:43:09 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:42.319 17:43:09 nvme -- scripts/common.sh@345 -- # : 1 00:08:42.319 17:43:09 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.319 17:43:09 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:42.319 17:43:09 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:42.319 17:43:09 nvme -- scripts/common.sh@353 -- # local d=1 00:08:42.319 17:43:09 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.319 17:43:09 nvme -- scripts/common.sh@355 -- # echo 1 00:08:42.319 17:43:09 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.319 17:43:09 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:42.319 17:43:09 nvme -- scripts/common.sh@353 -- # local d=2 00:08:42.319 17:43:09 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.319 17:43:09 nvme -- scripts/common.sh@355 -- # echo 2 00:08:42.319 17:43:09 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.319 17:43:09 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.319 17:43:09 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.319 17:43:09 nvme -- scripts/common.sh@368 -- # return 0 00:08:42.319 17:43:09 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.319 17:43:09 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:42.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.319 --rc genhtml_branch_coverage=1 00:08:42.319 --rc genhtml_function_coverage=1 00:08:42.319 --rc genhtml_legend=1 00:08:42.319 --rc geninfo_all_blocks=1 00:08:42.319 --rc geninfo_unexecuted_blocks=1 00:08:42.319 00:08:42.319 ' 00:08:42.319 17:43:09 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:42.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.319 --rc genhtml_branch_coverage=1 00:08:42.319 --rc genhtml_function_coverage=1 00:08:42.319 --rc genhtml_legend=1 00:08:42.319 --rc geninfo_all_blocks=1 00:08:42.319 --rc geninfo_unexecuted_blocks=1 00:08:42.319 00:08:42.319 ' 00:08:42.319 17:43:09 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:42.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.319 --rc genhtml_branch_coverage=1 00:08:42.319 --rc genhtml_function_coverage=1 00:08:42.320 --rc genhtml_legend=1 00:08:42.320 --rc geninfo_all_blocks=1 00:08:42.320 --rc geninfo_unexecuted_blocks=1 00:08:42.320 00:08:42.320 ' 00:08:42.320 17:43:09 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:42.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.320 --rc genhtml_branch_coverage=1 00:08:42.320 --rc genhtml_function_coverage=1 00:08:42.320 --rc genhtml_legend=1 00:08:42.320 --rc geninfo_all_blocks=1 00:08:42.320 --rc geninfo_unexecuted_blocks=1 00:08:42.320 00:08:42.320 ' 00:08:42.320 17:43:09 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:43.255 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:43.824 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:43.824 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:43.824 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:44.083 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:44.083 17:43:11 nvme -- nvme/nvme.sh@79 -- # uname 00:08:44.083 17:43:11 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:44.083 17:43:11 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:44.083 17:43:11 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:44.083 17:43:11 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:44.083 17:43:11 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:08:44.083 17:43:11 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:08:44.083 17:43:11 nvme -- common/autotest_common.sh@1075 -- # stubpid=64087 00:08:44.083 17:43:11 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:44.083 Waiting for stub to ready for secondary processes... 00:08:44.083 17:43:11 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:08:44.083 17:43:11 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:44.083 17:43:11 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64087 ]] 00:08:44.083 17:43:11 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:44.083 [2024-11-20 17:43:11.186140] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:08:44.083 [2024-11-20 17:43:11.186289] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:45.018 17:43:12 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:45.018 17:43:12 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64087 ]] 00:08:45.018 17:43:12 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:08:45.277 [2024-11-20 17:43:12.248492] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:45.277 [2024-11-20 17:43:12.357131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.277 [2024-11-20 17:43:12.357315] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.277 [2024-11-20 17:43:12.357365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.277 [2024-11-20 17:43:12.375462] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:45.277 [2024-11-20 17:43:12.375507] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:45.277 [2024-11-20 17:43:12.391472] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:45.277 [2024-11-20 17:43:12.391591] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:45.277 [2024-11-20 17:43:12.394580] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:45.278 [2024-11-20 17:43:12.394785] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:45.278 [2024-11-20 17:43:12.394863] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:45.278 [2024-11-20 17:43:12.399080] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:45.278 [2024-11-20 17:43:12.399319] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:45.278 [2024-11-20 17:43:12.399421] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:45.278 [2024-11-20 17:43:12.403608] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:45.278 [2024-11-20 17:43:12.403871] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:45.278 [2024-11-20 17:43:12.403972] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:45.278 [2024-11-20 17:43:12.404044] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:45.278 [2024-11-20 17:43:12.404119] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:46.216 17:43:13 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:46.216 done. 00:08:46.216 17:43:13 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:08:46.216 17:43:13 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:46.216 17:43:13 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:08:46.216 17:43:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.216 17:43:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:46.216 ************************************ 00:08:46.216 START TEST nvme_reset 00:08:46.216 ************************************ 00:08:46.216 17:43:13 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:46.514 Initializing NVMe Controllers 00:08:46.514 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:46.514 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:46.514 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:46.514 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:46.514 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:46.514 00:08:46.514 real 0m0.302s 00:08:46.514 user 0m0.095s 00:08:46.514 sys 0m0.164s 00:08:46.514 17:43:13 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.514 17:43:13 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:46.514 ************************************ 00:08:46.514 END TEST nvme_reset 00:08:46.514 ************************************ 00:08:46.514 17:43:13 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:46.515 17:43:13 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:46.515 17:43:13 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.515 17:43:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:46.515 ************************************ 00:08:46.515 START TEST nvme_identify 00:08:46.515 ************************************ 00:08:46.515 17:43:13 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:08:46.515 17:43:13 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:46.515 17:43:13 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:46.515 17:43:13 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:46.515 17:43:13 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:46.515 17:43:13 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:46.515 17:43:13 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:08:46.515 17:43:13 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:46.515 17:43:13 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:46.515 17:43:13 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:46.515 17:43:13 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:46.515 17:43:13 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:46.515 17:43:13 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:47.085 [2024-11-20 17:43:13.967968] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64121 terminated unexpected 00:08:47.085 ===================================================== 00:08:47.085 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:47.085 ===================================================== 00:08:47.085 Controller Capabilities/Features 00:08:47.085 ================================ 00:08:47.085 Vendor ID: 1b36 00:08:47.085 Subsystem Vendor ID: 1af4 00:08:47.085 Serial Number: 12340 00:08:47.085 Model Number: QEMU NVMe Ctrl 00:08:47.085 Firmware Version: 8.0.0 00:08:47.085 Recommended Arb Burst: 6 00:08:47.085 IEEE OUI Identifier: 00 54 52 00:08:47.085 Multi-path I/O 00:08:47.085 May have multiple subsystem ports: No 00:08:47.085 May have multiple controllers: No 00:08:47.085 Associated with SR-IOV VF: No 00:08:47.085 Max Data Transfer Size: 524288 00:08:47.085 Max Number of Namespaces: 256 00:08:47.085 Max Number of I/O Queues: 64 00:08:47.085 NVMe Specification Version (VS): 1.4 00:08:47.085 NVMe Specification Version (Identify): 1.4 00:08:47.085 Maximum Queue Entries: 2048 00:08:47.085 Contiguous Queues Required: Yes 00:08:47.085 Arbitration Mechanisms Supported 00:08:47.085 Weighted Round Robin: Not Supported 00:08:47.085 Vendor Specific: Not Supported 00:08:47.085 Reset Timeout: 7500 ms 00:08:47.085 Doorbell Stride: 4 bytes 00:08:47.085 NVM Subsystem Reset: Not Supported 00:08:47.085 Command Sets Supported 00:08:47.085 NVM Command Set: Supported 00:08:47.085 Boot Partition: Not Supported 00:08:47.085 Memory Page Size Minimum: 4096 bytes 00:08:47.085 Memory Page Size Maximum: 65536 bytes 00:08:47.085 Persistent Memory Region: Not Supported 00:08:47.085 Optional Asynchronous Events Supported 00:08:47.085 Namespace Attribute Notices: Supported 00:08:47.085 Firmware Activation Notices: Not Supported 00:08:47.085 ANA Change Notices: Not Supported 00:08:47.085 PLE Aggregate Log Change Notices: Not Supported 00:08:47.085 LBA Status Info Alert Notices: Not Supported 00:08:47.086 EGE Aggregate Log Change Notices: Not Supported 00:08:47.086 Normal NVM Subsystem Shutdown event: Not Supported 00:08:47.086 Zone Descriptor Change Notices: Not Supported 00:08:47.086 Discovery Log Change Notices: Not Supported 00:08:47.086 Controller Attributes 00:08:47.086 128-bit Host Identifier: Not Supported 00:08:47.086 Non-Operational Permissive Mode: Not Supported 00:08:47.086 NVM Sets: Not Supported 00:08:47.086 Read Recovery Levels: Not Supported 00:08:47.086 Endurance Groups: Not Supported 00:08:47.086 Predictable Latency Mode: Not Supported 00:08:47.086 Traffic Based Keep Alive: Not Supported 00:08:47.086 Namespace Granularity: Not Supported 00:08:47.086 SQ Associations: Not Supported 00:08:47.086 UUID List: Not Supported 00:08:47.086 Multi-Domain Subsystem: Not Supported 00:08:47.086 Fixed Capacity Management: Not Supported 00:08:47.086 Variable Capacity Management: Not Supported 00:08:47.086 Delete Endurance Group: Not Supported 00:08:47.086 Delete NVM Set: Not Supported 00:08:47.086 Extended LBA Formats Supported: Supported 00:08:47.086 Flexible Data Placement Supported: Not Supported 00:08:47.086 00:08:47.086 Controller Memory Buffer Support 00:08:47.086 ================================ 00:08:47.086 Supported: No 
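The get_nvme_bdfs trace at the start of this identify run shows how the harness picks its targets: scripts/gen_nvme.sh emits an SPDK JSON config and jq extracts each controller's PCI address (BDF). A minimal standalone sketch of that pattern, assuming gen_nvme.sh's output carries the traddr fields exactly as traced above; the empty-list message is illustrative, not the verbatim autotest helper:

#!/usr/bin/env bash
# Enumerate NVMe BDFs from the generated SPDK config, as get_nvme_bdfs does.
rootdir=/home/vagrant/spdk_repo/spdk   # repo path taken from the trace
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
# The trace above checks (( 4 == 0 )); generically, bail out when nothing was found.
(( ${#bdfs[@]} == 0 )) && { echo "no NVMe BDFs found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"             # here: 0000:00:10.0 through 0000:00:13.0

Each BDF is then handed to spdk_nvme_identify, which is what produces the controller reports that follow.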
00:08:47.086 00:08:47.086 Persistent Memory Region Support 00:08:47.086 ================================ 00:08:47.086 Supported: No 00:08:47.086 00:08:47.086 Admin Command Set Attributes 00:08:47.086 ============================ 00:08:47.086 Security Send/Receive: Not Supported 00:08:47.086 Format NVM: Supported 00:08:47.086 Firmware Activate/Download: Not Supported 00:08:47.086 Namespace Management: Supported 00:08:47.086 Device Self-Test: Not Supported 00:08:47.086 Directives: Supported 00:08:47.086 NVMe-MI: Not Supported 00:08:47.086 Virtualization Management: Not Supported 00:08:47.086 Doorbell Buffer Config: Supported 00:08:47.086 Get LBA Status Capability: Not Supported 00:08:47.086 Command & Feature Lockdown Capability: Not Supported 00:08:47.086 Abort Command Limit: 4 00:08:47.086 Async Event Request Limit: 4 00:08:47.086 Number of Firmware Slots: N/A 00:08:47.086 Firmware Slot 1 Read-Only: N/A 00:08:47.086 Firmware Activation Without Reset: N/A 00:08:47.086 Multiple Update Detection Support: N/A 00:08:47.086 Firmware Update Granularity: No Information Provided 00:08:47.086 Per-Namespace SMART Log: Yes 00:08:47.086 Asymmetric Namespace Access Log Page: Not Supported 00:08:47.086 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:47.086 Command Effects Log Page: Supported 00:08:47.086 Get Log Page Extended Data: Supported 00:08:47.086 Telemetry Log Pages: Not Supported 00:08:47.086 Persistent Event Log Pages: Not Supported 00:08:47.086 Supported Log Pages Log Page: May Support 00:08:47.086 Commands Supported & Effects Log Page: Not Supported 00:08:47.086 Feature Identifiers & Effects Log Page:May Support 00:08:47.086 NVMe-MI Commands & Effects Log Page: May Support 00:08:47.086 Data Area 4 for Telemetry Log: Not Supported 00:08:47.086 Error Log Page Entries Supported: 1 00:08:47.086 Keep Alive: Not Supported 00:08:47.086 00:08:47.086 NVM Command Set Attributes 00:08:47.086 ========================== 00:08:47.086 Submission Queue Entry Size 00:08:47.086 Max: 64 00:08:47.086 Min: 64 00:08:47.086 Completion Queue Entry Size 00:08:47.086 Max: 16 00:08:47.086 Min: 16 00:08:47.086 Number of Namespaces: 256 00:08:47.086 Compare Command: Supported 00:08:47.086 Write Uncorrectable Command: Not Supported 00:08:47.086 Dataset Management Command: Supported 00:08:47.086 Write Zeroes Command: Supported 00:08:47.086 Set Features Save Field: Supported 00:08:47.086 Reservations: Not Supported 00:08:47.086 Timestamp: Supported 00:08:47.086 Copy: Supported 00:08:47.086 Volatile Write Cache: Present 00:08:47.086 Atomic Write Unit (Normal): 1 00:08:47.086 Atomic Write Unit (PFail): 1 00:08:47.086 Atomic Compare & Write Unit: 1 00:08:47.086 Fused Compare & Write: Not Supported 00:08:47.086 Scatter-Gather List 00:08:47.086 SGL Command Set: Supported 00:08:47.086 SGL Keyed: Not Supported 00:08:47.086 SGL Bit Bucket Descriptor: Not Supported 00:08:47.086 SGL Metadata Pointer: Not Supported 00:08:47.086 Oversized SGL: Not Supported 00:08:47.086 SGL Metadata Address: Not Supported 00:08:47.086 SGL Offset: Not Supported 00:08:47.086 Transport SGL Data Block: Not Supported 00:08:47.086 Replay Protected Memory Block: Not Supported 00:08:47.086 00:08:47.086 Firmware Slot Information 00:08:47.086 ========================= 00:08:47.086 Active slot: 1 00:08:47.086 Slot 1 Firmware Revision: 1.0 00:08:47.086 00:08:47.086 00:08:47.086 Commands Supported and Effects 00:08:47.086 ============================== 00:08:47.086 Admin Commands 00:08:47.086 -------------- 00:08:47.086 Delete I/O Submission Queue (00h): Supported 
00:08:47.086 Create I/O Submission Queue (01h): Supported 00:08:47.086 Get Log Page (02h): Supported 00:08:47.086 Delete I/O Completion Queue (04h): Supported 00:08:47.086 Create I/O Completion Queue (05h): Supported 00:08:47.086 Identify (06h): Supported 00:08:47.086 Abort (08h): Supported 00:08:47.086 Set Features (09h): Supported 00:08:47.086 Get Features (0Ah): Supported 00:08:47.086 Asynchronous Event Request (0Ch): Supported 00:08:47.086 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:47.086 Directive Send (19h): Supported 00:08:47.086 Directive Receive (1Ah): Supported 00:08:47.086 Virtualization Management (1Ch): Supported 00:08:47.086 Doorbell Buffer Config (7Ch): Supported 00:08:47.086 Format NVM (80h): Supported LBA-Change 00:08:47.086 I/O Commands 00:08:47.086 ------------ 00:08:47.086 Flush (00h): Supported LBA-Change 00:08:47.086 Write (01h): Supported LBA-Change 00:08:47.086 Read (02h): Supported 00:08:47.086 Compare (05h): Supported 00:08:47.086 Write Zeroes (08h): Supported LBA-Change 00:08:47.086 Dataset Management (09h): Supported LBA-Change 00:08:47.086 Unknown (0Ch): Supported 00:08:47.086 Unknown (12h): Supported 00:08:47.086 Copy (19h): Supported LBA-Change 00:08:47.086 Unknown (1Dh): Supported LBA-Change 00:08:47.086 00:08:47.086 Error Log 00:08:47.086 ========= 00:08:47.086 00:08:47.086 Arbitration 00:08:47.086 =========== 00:08:47.086 Arbitration Burst: no limit 00:08:47.086 00:08:47.086 Power Management 00:08:47.086 ================ 00:08:47.086 Number of Power States: 1 00:08:47.086 Current Power State: Power State #0 00:08:47.086 Power State #0: 00:08:47.086 Max Power: 25.00 W 00:08:47.086 Non-Operational State: Operational 00:08:47.086 Entry Latency: 16 microseconds 00:08:47.086 Exit Latency: 4 microseconds 00:08:47.086 Relative Read Throughput: 0 00:08:47.086 Relative Read Latency: 0 00:08:47.086 Relative Write Throughput: 0 00:08:47.086 Relative Write Latency: 0 00:08:47.086 [2024-11-20 17:43:13.969398] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64121 terminated unexpected 00:08:47.086 Idle Power: Not Reported 00:08:47.086 Active Power: Not Reported 00:08:47.086 Non-Operational Permissive Mode: Not Supported 00:08:47.086 00:08:47.086 Health Information 00:08:47.086 ================== 00:08:47.086 Critical Warnings: 00:08:47.086 Available Spare Space: OK 00:08:47.086 Temperature: OK 00:08:47.086 Device Reliability: OK 00:08:47.086 Read Only: No 00:08:47.086 Volatile Memory Backup: OK 00:08:47.086 Current Temperature: 323 Kelvin (50 Celsius) 00:08:47.086 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:47.086 Available Spare: 0% 00:08:47.086 Available Spare Threshold: 0% 00:08:47.086 Life Percentage Used: 0% 00:08:47.086 Data Units Read: 773 00:08:47.086 Data Units Written: 701 00:08:47.086 Host Read Commands: 37435 00:08:47.086 Host Write Commands: 37221 00:08:47.086 Controller Busy Time: 0 minutes 00:08:47.086 Power Cycles: 0 00:08:47.086 Power On Hours: 0 hours 00:08:47.086 Unsafe Shutdowns: 0 00:08:47.086 Unrecoverable Media Errors: 0 00:08:47.086 Lifetime Error Log Entries: 0 00:08:47.086 Warning Temperature Time: 0 minutes 00:08:47.086 Critical Temperature Time: 0 minutes 00:08:47.086 00:08:47.086 Number of Queues 00:08:47.086 ================ 00:08:47.086 Number of I/O Submission Queues: 64 00:08:47.086 Number of I/O Completion Queues: 64 00:08:47.086 00:08:47.086 ZNS Specific Controller Data 00:08:47.086 ============================ 00:08:47.086 Zone Append Size Limit: 0 00:08:47.086 
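The health section above reports temperatures in Kelvin with the Celsius value in parentheses; the report uses the integer offset C = K - 273, so 323 K prints as 50 Celsius and the 343 K threshold as 70 Celsius. A small illustrative check against the reported threshold; the variable names and warning text are hypothetical, and a real script would parse the values out of the identify output instead of hard-coding them:

#!/usr/bin/env bash
# Compare the controller temperature against its threshold, using the
# Kelvin figures printed in the report above.
current_k=323     # "Current Temperature: 323 Kelvin (50 Celsius)"
threshold_k=343   # "Temperature Threshold: 343 Kelvin (70 Celsius)"
current_c=$(( current_k - 273 ))   # same integer offset the report applies
if (( current_k >= threshold_k )); then
    echo "WARNING: controller at ${current_c} Celsius (threshold $(( threshold_k - 273 )) Celsius)" >&2
else
    echo "temperature OK: ${current_c} Celsius"
fi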
00:08:47.086 00:08:47.086 Active Namespaces 00:08:47.086 ================= 00:08:47.086 Namespace ID:1 00:08:47.086 Error Recovery Timeout: Unlimited 00:08:47.086 Command Set Identifier: NVM (00h) 00:08:47.086 Deallocate: Supported 00:08:47.086 Deallocated/Unwritten Error: Supported 00:08:47.086 Deallocated Read Value: All 0x00 00:08:47.086 Deallocate in Write Zeroes: Not Supported 00:08:47.086 Deallocated Guard Field: 0xFFFF 00:08:47.086 Flush: Supported 00:08:47.086 Reservation: Not Supported 00:08:47.086 Metadata Transferred as: Separate Metadata Buffer 00:08:47.086 Namespace Sharing Capabilities: Private 00:08:47.086 Size (in LBAs): 1548666 (5GiB) 00:08:47.086 Capacity (in LBAs): 1548666 (5GiB) 00:08:47.086 Utilization (in LBAs): 1548666 (5GiB) 00:08:47.086 Thin Provisioning: Not Supported 00:08:47.086 Per-NS Atomic Units: No 00:08:47.086 Maximum Single Source Range Length: 128 00:08:47.086 Maximum Copy Length: 128 00:08:47.086 Maximum Source Range Count: 128 00:08:47.086 NGUID/EUI64 Never Reused: No 00:08:47.086 Namespace Write Protected: No 00:08:47.086 Number of LBA Formats: 8 00:08:47.086 Current LBA Format: LBA Format #07 00:08:47.086 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:47.086 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:47.086 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:47.086 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:47.086 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:47.086 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:47.086 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:47.086 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:47.086 00:08:47.086 NVM Specific Namespace Data 00:08:47.086 =========================== 00:08:47.086 Logical Block Storage Tag Mask: 0 00:08:47.086 Protection Information Capabilities: 00:08:47.086 16b Guard Protection Information Storage Tag Support: No 00:08:47.086 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:47.086 Storage Tag Check Read Support: No 00:08:47.086 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.086 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.086 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.086 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.086 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.086 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.086 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.086 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.086 ===================================================== 00:08:47.086 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:47.086 ===================================================== 00:08:47.086 Controller Capabilities/Features 00:08:47.086 ================================ 00:08:47.086 Vendor ID: 1b36 00:08:47.086 Subsystem Vendor ID: 1af4 00:08:47.086 Serial Number: 12341 00:08:47.086 Model Number: QEMU NVMe Ctrl 00:08:47.086 Firmware Version: 8.0.0 00:08:47.086 Recommended Arb Burst: 6 00:08:47.086 IEEE OUI Identifier: 00 54 52 00:08:47.086 Multi-path I/O 00:08:47.086 May have multiple subsystem ports: No 00:08:47.086 May have multiple controllers: No 
00:08:47.086 Associated with SR-IOV VF: No 00:08:47.086 Max Data Transfer Size: 524288 00:08:47.086 Max Number of Namespaces: 256 00:08:47.086 Max Number of I/O Queues: 64 00:08:47.086 NVMe Specification Version (VS): 1.4 00:08:47.086 NVMe Specification Version (Identify): 1.4 00:08:47.086 Maximum Queue Entries: 2048 00:08:47.086 Contiguous Queues Required: Yes 00:08:47.086 Arbitration Mechanisms Supported 00:08:47.086 Weighted Round Robin: Not Supported 00:08:47.086 Vendor Specific: Not Supported 00:08:47.086 Reset Timeout: 7500 ms 00:08:47.086 Doorbell Stride: 4 bytes 00:08:47.086 NVM Subsystem Reset: Not Supported 00:08:47.086 Command Sets Supported 00:08:47.086 NVM Command Set: Supported 00:08:47.086 Boot Partition: Not Supported 00:08:47.086 Memory Page Size Minimum: 4096 bytes 00:08:47.086 Memory Page Size Maximum: 65536 bytes 00:08:47.086 Persistent Memory Region: Not Supported 00:08:47.086 Optional Asynchronous Events Supported 00:08:47.086 Namespace Attribute Notices: Supported 00:08:47.086 Firmware Activation Notices: Not Supported 00:08:47.086 ANA Change Notices: Not Supported 00:08:47.086 PLE Aggregate Log Change Notices: Not Supported 00:08:47.086 LBA Status Info Alert Notices: Not Supported 00:08:47.086 EGE Aggregate Log Change Notices: Not Supported 00:08:47.086 Normal NVM Subsystem Shutdown event: Not Supported 00:08:47.086 Zone Descriptor Change Notices: Not Supported 00:08:47.086 Discovery Log Change Notices: Not Supported 00:08:47.086 Controller Attributes 00:08:47.086 128-bit Host Identifier: Not Supported 00:08:47.086 Non-Operational Permissive Mode: Not Supported 00:08:47.086 NVM Sets: Not Supported 00:08:47.086 Read Recovery Levels: Not Supported 00:08:47.086 Endurance Groups: Not Supported 00:08:47.086 Predictable Latency Mode: Not Supported 00:08:47.086 Traffic Based Keep Alive: Not Supported 00:08:47.086 Namespace Granularity: Not Supported 00:08:47.086 SQ Associations: Not Supported 00:08:47.086 UUID List: Not Supported 00:08:47.086 Multi-Domain Subsystem: Not Supported 00:08:47.086 Fixed Capacity Management: Not Supported 00:08:47.086 Variable Capacity Management: Not Supported 00:08:47.086 Delete Endurance Group: Not Supported 00:08:47.086 Delete NVM Set: Not Supported 00:08:47.086 Extended LBA Formats Supported: Supported 00:08:47.086 Flexible Data Placement Supported: Not Supported 00:08:47.086 00:08:47.086 Controller Memory Buffer Support 00:08:47.086 ================================ 00:08:47.086 Supported: No 00:08:47.086 00:08:47.086 Persistent Memory Region Support 00:08:47.086 ================================ 00:08:47.086 Supported: No 00:08:47.086 00:08:47.086 Admin Command Set Attributes 00:08:47.086 ============================ 00:08:47.086 Security Send/Receive: Not Supported 00:08:47.086 Format NVM: Supported 00:08:47.086 Firmware Activate/Download: Not Supported 00:08:47.086 Namespace Management: Supported 00:08:47.086 Device Self-Test: Not Supported 00:08:47.086 Directives: Supported 00:08:47.086 NVMe-MI: Not Supported 00:08:47.086 Virtualization Management: Not Supported 00:08:47.086 Doorbell Buffer Config: Supported 00:08:47.086 Get LBA Status Capability: Not Supported 00:08:47.086 Command & Feature Lockdown Capability: Not Supported 00:08:47.086 Abort Command Limit: 4 00:08:47.086 Async Event Request Limit: 4 00:08:47.086 Number of Firmware Slots: N/A 00:08:47.086 Firmware Slot 1 Read-Only: N/A 00:08:47.086 Firmware Activation Without Reset: N/A 00:08:47.086 Multiple Update Detection Support: N/A 00:08:47.086 Firmware Update Granularity: No 
Information Provided 00:08:47.086 Per-Namespace SMART Log: Yes 00:08:47.086 Asymmetric Namespace Access Log Page: Not Supported 00:08:47.086 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:47.086 Command Effects Log Page: Supported 00:08:47.086 Get Log Page Extended Data: Supported 00:08:47.087 Telemetry Log Pages: Not Supported 00:08:47.087 Persistent Event Log Pages: Not Supported 00:08:47.087 Supported Log Pages Log Page: May Support 00:08:47.087 Commands Supported & Effects Log Page: Not Supported 00:08:47.087 Feature Identifiers & Effects Log Page:May Support 00:08:47.087 NVMe-MI Commands & Effects Log Page: May Support 00:08:47.087 Data Area 4 for Telemetry Log: Not Supported 00:08:47.087 Error Log Page Entries Supported: 1 00:08:47.087 Keep Alive: Not Supported 00:08:47.087 00:08:47.087 NVM Command Set Attributes 00:08:47.087 ========================== 00:08:47.087 Submission Queue Entry Size 00:08:47.087 Max: 64 00:08:47.087 Min: 64 00:08:47.087 Completion Queue Entry Size 00:08:47.087 Max: 16 00:08:47.087 Min: 16 00:08:47.087 Number of Namespaces: 256 00:08:47.087 Compare Command: Supported 00:08:47.087 Write Uncorrectable Command: Not Supported 00:08:47.087 Dataset Management Command: Supported 00:08:47.087 Write Zeroes Command: Supported 00:08:47.087 Set Features Save Field: Supported 00:08:47.087 Reservations: Not Supported 00:08:47.087 Timestamp: Supported 00:08:47.087 Copy: Supported 00:08:47.087 Volatile Write Cache: Present 00:08:47.087 Atomic Write Unit (Normal): 1 00:08:47.087 Atomic Write Unit (PFail): 1 00:08:47.087 Atomic Compare & Write Unit: 1 00:08:47.087 Fused Compare & Write: Not Supported 00:08:47.087 Scatter-Gather List 00:08:47.087 SGL Command Set: Supported 00:08:47.087 SGL Keyed: Not Supported 00:08:47.087 SGL Bit Bucket Descriptor: Not Supported 00:08:47.087 SGL Metadata Pointer: Not Supported 00:08:47.087 Oversized SGL: Not Supported 00:08:47.087 SGL Metadata Address: Not Supported 00:08:47.087 SGL Offset: Not Supported 00:08:47.087 Transport SGL Data Block: Not Supported 00:08:47.087 Replay Protected Memory Block: Not Supported 00:08:47.087 00:08:47.087 Firmware Slot Information 00:08:47.087 ========================= 00:08:47.087 Active slot: 1 00:08:47.087 Slot 1 Firmware Revision: 1.0 00:08:47.087 00:08:47.087 00:08:47.087 Commands Supported and Effects 00:08:47.087 ============================== 00:08:47.087 Admin Commands 00:08:47.087 -------------- 00:08:47.087 Delete I/O Submission Queue (00h): Supported 00:08:47.087 Create I/O Submission Queue (01h): Supported 00:08:47.087 Get Log Page (02h): Supported 00:08:47.087 Delete I/O Completion Queue (04h): Supported 00:08:47.087 Create I/O Completion Queue (05h): Supported 00:08:47.087 Identify (06h): Supported 00:08:47.087 Abort (08h): Supported 00:08:47.087 Set Features (09h): Supported 00:08:47.087 Get Features (0Ah): Supported 00:08:47.087 Asynchronous Event Request (0Ch): Supported 00:08:47.087 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:47.087 Directive Send (19h): Supported 00:08:47.087 Directive Receive (1Ah): Supported 00:08:47.087 Virtualization Management (1Ch): Supported 00:08:47.087 Doorbell Buffer Config (7Ch): Supported 00:08:47.087 Format NVM (80h): Supported LBA-Change 00:08:47.087 I/O Commands 00:08:47.087 ------------ 00:08:47.087 Flush (00h): Supported LBA-Change 00:08:47.087 Write (01h): Supported LBA-Change 00:08:47.087 Read (02h): Supported 00:08:47.087 Compare (05h): Supported 00:08:47.087 Write Zeroes (08h): Supported LBA-Change 00:08:47.087 Dataset Management 
(09h): Supported LBA-Change 00:08:47.087 Unknown (0Ch): Supported 00:08:47.087 Unknown (12h): Supported 00:08:47.087 Copy (19h): Supported LBA-Change 00:08:47.087 Unknown (1Dh): Supported LBA-Change 00:08:47.087 00:08:47.087 Error Log 00:08:47.087 ========= 00:08:47.087 00:08:47.087 Arbitration 00:08:47.087 =========== 00:08:47.087 Arbitration Burst: no limit 00:08:47.087 00:08:47.087 Power Management 00:08:47.087 ================ 00:08:47.087 Number of Power States: 1 00:08:47.087 Current Power State: Power State #0 00:08:47.087 Power State #0: 00:08:47.087 Max Power: 25.00 W 00:08:47.087 Non-Operational State: Operational 00:08:47.087 Entry Latency: 16 microseconds 00:08:47.087 Exit Latency: 4 microseconds 00:08:47.087 Relative Read Throughput: 0 00:08:47.087 Relative Read Latency: 0 00:08:47.087 Relative Write Throughput: 0 00:08:47.087 Relative Write Latency: 0 00:08:47.087 Idle Power: Not Reported 00:08:47.087 Active Power: Not Reported 00:08:47.087 Non-Operational Permissive Mode: Not Supported 00:08:47.087 00:08:47.087 Health Information 00:08:47.087 ================== 00:08:47.087 Critical Warnings: 00:08:47.087 Available Spare Space: OK 00:08:47.087 [2024-11-20 17:43:13.970561] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64121 terminated unexpected 00:08:47.087 Temperature: OK 00:08:47.087 Device Reliability: OK 00:08:47.087 Read Only: No 00:08:47.087 Volatile Memory Backup: OK 00:08:47.087 Current Temperature: 323 Kelvin (50 Celsius) 00:08:47.087 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:47.087 Available Spare: 0% 00:08:47.087 Available Spare Threshold: 0% 00:08:47.087 Life Percentage Used: 0% 00:08:47.087 Data Units Read: 1157 00:08:47.087 Data Units Written: 1024 00:08:47.087 Host Read Commands: 55825 00:08:47.087 Host Write Commands: 54633 00:08:47.087 Controller Busy Time: 0 minutes 00:08:47.087 Power Cycles: 0 00:08:47.087 Power On Hours: 0 hours 00:08:47.087 Unsafe Shutdowns: 0 00:08:47.087 Unrecoverable Media Errors: 0 00:08:47.087 Lifetime Error Log Entries: 0 00:08:47.087 Warning Temperature Time: 0 minutes 00:08:47.087 Critical Temperature Time: 0 minutes 00:08:47.087 00:08:47.087 Number of Queues 00:08:47.087 ================ 00:08:47.087 Number of I/O Submission Queues: 64 00:08:47.087 Number of I/O Completion Queues: 64 00:08:47.087 00:08:47.087 ZNS Specific Controller Data 00:08:47.087 ============================ 00:08:47.087 Zone Append Size Limit: 0 00:08:47.087 00:08:47.087 00:08:47.087 Active Namespaces 00:08:47.087 ================= 00:08:47.087 Namespace ID:1 00:08:47.087 Error Recovery Timeout: Unlimited 00:08:47.087 Command Set Identifier: NVM (00h) 00:08:47.087 Deallocate: Supported 00:08:47.087 Deallocated/Unwritten Error: Supported 00:08:47.087 Deallocated Read Value: All 0x00 00:08:47.087 Deallocate in Write Zeroes: Not Supported 00:08:47.087 Deallocated Guard Field: 0xFFFF 00:08:47.087 Flush: Supported 00:08:47.087 Reservation: Not Supported 00:08:47.087 Namespace Sharing Capabilities: Private 00:08:47.087 Size (in LBAs): 1310720 (5GiB) 00:08:47.087 Capacity (in LBAs): 1310720 (5GiB) 00:08:47.087 Utilization (in LBAs): 1310720 (5GiB) 00:08:47.087 Thin Provisioning: Not Supported 00:08:47.087 Per-NS Atomic Units: No 00:08:47.087 Maximum Single Source Range Length: 128 00:08:47.087 Maximum Copy Length: 128 00:08:47.087 Maximum Source Range Count: 128 00:08:47.087 NGUID/EUI64 Never Reused: No 00:08:47.087 Namespace Write Protected: No 00:08:47.087 Number of LBA Formats: 8 00:08:47.087 Current LBA 
Format: LBA Format #04 00:08:47.087 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:47.087 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:47.087 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:47.087 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:47.087 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:47.087 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:47.087 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:47.087 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:47.087 00:08:47.087 NVM Specific Namespace Data 00:08:47.087 =========================== 00:08:47.087 Logical Block Storage Tag Mask: 0 00:08:47.087 Protection Information Capabilities: 00:08:47.087 16b Guard Protection Information Storage Tag Support: No 00:08:47.087 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:47.087 Storage Tag Check Read Support: No 00:08:47.087 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.087 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.087 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.087 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.087 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.087 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.087 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.087 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.087 ===================================================== 00:08:47.087 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:47.087 ===================================================== 00:08:47.087 Controller Capabilities/Features 00:08:47.087 ================================ 00:08:47.087 Vendor ID: 1b36 00:08:47.087 Subsystem Vendor ID: 1af4 00:08:47.087 Serial Number: 12343 00:08:47.087 Model Number: QEMU NVMe Ctrl 00:08:47.087 Firmware Version: 8.0.0 00:08:47.087 Recommended Arb Burst: 6 00:08:47.087 IEEE OUI Identifier: 00 54 52 00:08:47.087 Multi-path I/O 00:08:47.087 May have multiple subsystem ports: No 00:08:47.087 May have multiple controllers: Yes 00:08:47.087 Associated with SR-IOV VF: No 00:08:47.087 Max Data Transfer Size: 524288 00:08:47.087 Max Number of Namespaces: 256 00:08:47.087 Max Number of I/O Queues: 64 00:08:47.087 NVMe Specification Version (VS): 1.4 00:08:47.087 NVMe Specification Version (Identify): 1.4 00:08:47.087 Maximum Queue Entries: 2048 00:08:47.087 Contiguous Queues Required: Yes 00:08:47.087 Arbitration Mechanisms Supported 00:08:47.087 Weighted Round Robin: Not Supported 00:08:47.087 Vendor Specific: Not Supported 00:08:47.087 Reset Timeout: 7500 ms 00:08:47.087 Doorbell Stride: 4 bytes 00:08:47.087 NVM Subsystem Reset: Not Supported 00:08:47.087 Command Sets Supported 00:08:47.087 NVM Command Set: Supported 00:08:47.087 Boot Partition: Not Supported 00:08:47.087 Memory Page Size Minimum: 4096 bytes 00:08:47.087 Memory Page Size Maximum: 65536 bytes 00:08:47.087 Persistent Memory Region: Not Supported 00:08:47.087 Optional Asynchronous Events Supported 00:08:47.087 Namespace Attribute Notices: Supported 00:08:47.087 Firmware Activation Notices: Not Supported 00:08:47.087 ANA Change Notices: Not Supported 00:08:47.087 PLE Aggregate 
Log Change Notices: Not Supported 00:08:47.087 LBA Status Info Alert Notices: Not Supported 00:08:47.087 EGE Aggregate Log Change Notices: Not Supported 00:08:47.087 Normal NVM Subsystem Shutdown event: Not Supported 00:08:47.087 Zone Descriptor Change Notices: Not Supported 00:08:47.087 Discovery Log Change Notices: Not Supported 00:08:47.087 Controller Attributes 00:08:47.087 128-bit Host Identifier: Not Supported 00:08:47.087 Non-Operational Permissive Mode: Not Supported 00:08:47.087 NVM Sets: Not Supported 00:08:47.087 Read Recovery Levels: Not Supported 00:08:47.087 Endurance Groups: Supported 00:08:47.087 Predictable Latency Mode: Not Supported 00:08:47.087 Traffic Based Keep Alive: Not Supported 00:08:47.087 Namespace Granularity: Not Supported 00:08:47.087 SQ Associations: Not Supported 00:08:47.087 UUID List: Not Supported 00:08:47.087 Multi-Domain Subsystem: Not Supported 00:08:47.087 Fixed Capacity Management: Not Supported 00:08:47.087 Variable Capacity Management: Not Supported 00:08:47.087 Delete Endurance Group: Not Supported 00:08:47.087 Delete NVM Set: Not Supported 00:08:47.087 Extended LBA Formats Supported: Supported 00:08:47.087 Flexible Data Placement Supported: Supported 00:08:47.087 00:08:47.087 Controller Memory Buffer Support 00:08:47.087 ================================ 00:08:47.087 Supported: No 00:08:47.087 00:08:47.087 Persistent Memory Region Support 00:08:47.087 ================================ 00:08:47.087 Supported: No 00:08:47.087 00:08:47.087 Admin Command Set Attributes 00:08:47.087 ============================ 00:08:47.087 Security Send/Receive: Not Supported 00:08:47.087 Format NVM: Supported 00:08:47.087 Firmware Activate/Download: Not Supported 00:08:47.087 Namespace Management: Supported 00:08:47.087 Device Self-Test: Not Supported 00:08:47.087 Directives: Supported 00:08:47.087 NVMe-MI: Not Supported 00:08:47.087 Virtualization Management: Not Supported 00:08:47.087 Doorbell Buffer Config: Supported 00:08:47.087 Get LBA Status Capability: Not Supported 00:08:47.087 Command & Feature Lockdown Capability: Not Supported 00:08:47.087 Abort Command Limit: 4 00:08:47.087 Async Event Request Limit: 4 00:08:47.087 Number of Firmware Slots: N/A 00:08:47.087 Firmware Slot 1 Read-Only: N/A 00:08:47.087 Firmware Activation Without Reset: N/A 00:08:47.087 Multiple Update Detection Support: N/A 00:08:47.087 Firmware Update Granularity: No Information Provided 00:08:47.087 Per-Namespace SMART Log: Yes 00:08:47.087 Asymmetric Namespace Access Log Page: Not Supported 00:08:47.087 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:47.087 Command Effects Log Page: Supported 00:08:47.087 Get Log Page Extended Data: Supported 00:08:47.087 Telemetry Log Pages: Not Supported 00:08:47.087 Persistent Event Log Pages: Not Supported 00:08:47.087 Supported Log Pages Log Page: May Support 00:08:47.087 Commands Supported & Effects Log Page: Not Supported 00:08:47.087 Feature Identifiers & Effects Log Page:May Support 00:08:47.087 NVMe-MI Commands & Effects Log Page: May Support 00:08:47.087 Data Area 4 for Telemetry Log: Not Supported 00:08:47.087 Error Log Page Entries Supported: 1 00:08:47.087 Keep Alive: Not Supported 00:08:47.087 00:08:47.087 NVM Command Set Attributes 00:08:47.087 ========================== 00:08:47.087 Submission Queue Entry Size 00:08:47.087 Max: 64 00:08:47.087 Min: 64 00:08:47.087 Completion Queue Entry Size 00:08:47.087 Max: 16 00:08:47.087 Min: 16 00:08:47.087 Number of Namespaces: 256 00:08:47.087 Compare Command: Supported 00:08:47.087 Write 
Uncorrectable Command: Not Supported 00:08:47.087 Dataset Management Command: Supported 00:08:47.087 Write Zeroes Command: Supported 00:08:47.087 Set Features Save Field: Supported 00:08:47.087 Reservations: Not Supported 00:08:47.087 Timestamp: Supported 00:08:47.087 Copy: Supported 00:08:47.087 Volatile Write Cache: Present 00:08:47.087 Atomic Write Unit (Normal): 1 00:08:47.087 Atomic Write Unit (PFail): 1 00:08:47.087 Atomic Compare & Write Unit: 1 00:08:47.087 Fused Compare & Write: Not Supported 00:08:47.087 Scatter-Gather List 00:08:47.087 SGL Command Set: Supported 00:08:47.087 SGL Keyed: Not Supported 00:08:47.087 SGL Bit Bucket Descriptor: Not Supported 00:08:47.087 SGL Metadata Pointer: Not Supported 00:08:47.087 Oversized SGL: Not Supported 00:08:47.087 SGL Metadata Address: Not Supported 00:08:47.087 SGL Offset: Not Supported 00:08:47.087 Transport SGL Data Block: Not Supported 00:08:47.087 Replay Protected Memory Block: Not Supported 00:08:47.087 00:08:47.087 Firmware Slot Information 00:08:47.087 ========================= 00:08:47.087 Active slot: 1 00:08:47.087 Slot 1 Firmware Revision: 1.0 00:08:47.087 00:08:47.087 00:08:47.087 Commands Supported and Effects 00:08:47.087 ============================== 00:08:47.087 Admin Commands 00:08:47.087 -------------- 00:08:47.087 Delete I/O Submission Queue (00h): Supported 00:08:47.088 Create I/O Submission Queue (01h): Supported 00:08:47.088 Get Log Page (02h): Supported 00:08:47.088 Delete I/O Completion Queue (04h): Supported 00:08:47.088 Create I/O Completion Queue (05h): Supported 00:08:47.088 Identify (06h): Supported 00:08:47.088 Abort (08h): Supported 00:08:47.088 Set Features (09h): Supported 00:08:47.088 Get Features (0Ah): Supported 00:08:47.088 Asynchronous Event Request (0Ch): Supported 00:08:47.088 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:47.088 Directive Send (19h): Supported 00:08:47.088 Directive Receive (1Ah): Supported 00:08:47.088 Virtualization Management (1Ch): Supported 00:08:47.088 Doorbell Buffer Config (7Ch): Supported 00:08:47.088 Format NVM (80h): Supported LBA-Change 00:08:47.088 I/O Commands 00:08:47.088 ------------ 00:08:47.088 Flush (00h): Supported LBA-Change 00:08:47.088 Write (01h): Supported LBA-Change 00:08:47.088 Read (02h): Supported 00:08:47.088 Compare (05h): Supported 00:08:47.088 Write Zeroes (08h): Supported LBA-Change 00:08:47.088 Dataset Management (09h): Supported LBA-Change 00:08:47.088 Unknown (0Ch): Supported 00:08:47.088 Unknown (12h): Supported 00:08:47.088 Copy (19h): Supported LBA-Change 00:08:47.088 Unknown (1Dh): Supported LBA-Change 00:08:47.088 00:08:47.088 Error Log 00:08:47.088 ========= 00:08:47.088 00:08:47.088 Arbitration 00:08:47.088 =========== 00:08:47.088 Arbitration Burst: no limit 00:08:47.088 00:08:47.088 Power Management 00:08:47.088 ================ 00:08:47.088 Number of Power States: 1 00:08:47.088 Current Power State: Power State #0 00:08:47.088 Power State #0: 00:08:47.088 Max Power: 25.00 W 00:08:47.088 Non-Operational State: Operational 00:08:47.088 Entry Latency: 16 microseconds 00:08:47.088 Exit Latency: 4 microseconds 00:08:47.088 Relative Read Throughput: 0 00:08:47.088 Relative Read Latency: 0 00:08:47.088 Relative Write Throughput: 0 00:08:47.088 Relative Write Latency: 0 00:08:47.088 Idle Power: Not Reported 00:08:47.088 Active Power: Not Reported 00:08:47.088 Non-Operational Permissive Mode: Not Supported 00:08:47.088 00:08:47.088 Health Information 00:08:47.088 ================== 00:08:47.088 Critical Warnings: 00:08:47.088 
Available Spare Space: OK 00:08:47.088 Temperature: OK 00:08:47.088 Device Reliability: OK 00:08:47.088 Read Only: No 00:08:47.088 Volatile Memory Backup: OK 00:08:47.088 Current Temperature: 323 Kelvin (50 Celsius) 00:08:47.088 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:47.088 Available Spare: 0% 00:08:47.088 Available Spare Threshold: 0% 00:08:47.088 Life Percentage Used: 0% 00:08:47.088 Data Units Read: 906 00:08:47.088 Data Units Written: 835 00:08:47.088 Host Read Commands: 39017 00:08:47.088 Host Write Commands: 38440 00:08:47.088 Controller Busy Time: 0 minutes 00:08:47.088 Power Cycles: 0 00:08:47.088 Power On Hours: 0 hours 00:08:47.088 Unsafe Shutdowns: 0 00:08:47.088 Unrecoverable Media Errors: 0 00:08:47.088 Lifetime Error Log Entries: 0 00:08:47.088 Warning Temperature Time: 0 minutes 00:08:47.088 Critical Temperature Time: 0 minutes 00:08:47.088 00:08:47.088 Number of Queues 00:08:47.088 ================ 00:08:47.088 Number of I/O Submission Queues: 64 00:08:47.088 Number of I/O Completion Queues: 64 00:08:47.088 00:08:47.088 ZNS Specific Controller Data 00:08:47.088 ============================ 00:08:47.088 Zone Append Size Limit: 0 00:08:47.088 00:08:47.088 00:08:47.088 Active Namespaces 00:08:47.088 ================= 00:08:47.088 Namespace ID:1 00:08:47.088 Error Recovery Timeout: Unlimited 00:08:47.088 Command Set Identifier: NVM (00h) 00:08:47.088 Deallocate: Supported 00:08:47.088 Deallocated/Unwritten Error: Supported 00:08:47.088 Deallocated Read Value: All 0x00 00:08:47.088 Deallocate in Write Zeroes: Not Supported 00:08:47.088 Deallocated Guard Field: 0xFFFF 00:08:47.088 Flush: Supported 00:08:47.088 Reservation: Not Supported 00:08:47.088 Namespace Sharing Capabilities: Multiple Controllers 00:08:47.088 Size (in LBAs): 262144 (1GiB) 00:08:47.088 Capacity (in LBAs): 262144 (1GiB) 00:08:47.088 Utilization (in LBAs): 262144 (1GiB) 00:08:47.088 Thin Provisioning: Not Supported 00:08:47.088 Per-NS Atomic Units: No 00:08:47.088 Maximum Single Source Range Length: 128 00:08:47.088 Maximum Copy Length: 128 00:08:47.088 Maximum Source Range Count: 128 00:08:47.088 NGUID/EUI64 Never Reused: No 00:08:47.088 Namespace Write Protected: No 00:08:47.088 Endurance group ID: 1 00:08:47.088 Number of LBA Formats: 8 00:08:47.088 Current LBA Format: LBA Format #04 00:08:47.088 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:47.088 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:47.088 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:47.088 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:47.088 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:47.088 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:47.088 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:47.088 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:47.088 00:08:47.088 Get Feature FDP: 00:08:47.088 ================ 00:08:47.088 Enabled: Yes 00:08:47.088 FDP configuration index: 0 00:08:47.088 00:08:47.088 FDP configurations log page 00:08:47.088 =========================== 00:08:47.088 Number of FDP configurations: 1 00:08:47.088 Version: 0 00:08:47.088 Size: 112 00:08:47.088 FDP Configuration Descriptor: 0 00:08:47.088 Descriptor Size: 96 00:08:47.088 Reclaim Group Identifier format: 2 00:08:47.088 FDP Volatile Write Cache: Not Present 00:08:47.088 FDP Configuration: Valid 00:08:47.088 Vendor Specific Size: 0 00:08:47.088 Number of Reclaim Groups: 2 00:08:47.088 Number of Reclaim Unit Handles: 8 00:08:47.088 Max Placement Identifiers: 128 00:08:47.088 Number of 
Namespaces Supported: 256 00:08:47.088 Reclaim unit Nominal Size: 6000000 bytes 00:08:47.088 Estimated Reclaim Unit Time Limit: Not Reported 00:08:47.088 RUH Desc #000: RUH Type: Initially Isolated 00:08:47.088 RUH Desc #001: RUH Type: Initially Isolated 00:08:47.088 RUH Desc #002: RUH Type: Initially Isolated 00:08:47.088 RUH Desc #003: RUH Type: Initially Isolated 00:08:47.088 RUH Desc #004: RUH Type: Initially Isolated 00:08:47.088 RUH Desc #005: RUH Type: Initially Isolated 00:08:47.088 RUH Desc #006: RUH Type: Initially Isolated 00:08:47.088 RUH Desc #007: RUH Type: Initially Isolated 00:08:47.088 00:08:47.088 FDP reclaim unit handle usage log page 00:08:47.088 ====================================== 00:08:47.088 Number of Reclaim Unit Handles: 8 00:08:47.088 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:47.088 RUH Usage Desc #001: RUH Attributes: Unused 00:08:47.088 RUH Usage Desc #002: RUH Attributes: Unused 00:08:47.088 RUH Usage Desc #003: RUH Attributes: Unused 00:08:47.088 RUH Usage Desc #004: RUH Attributes: Unused 00:08:47.088 RUH Usage Desc #005: RUH Attributes: Unused 00:08:47.088 RUH Usage Desc #006: RUH Attributes: Unused 00:08:47.088 RUH Usage Desc #007: RUH Attributes: Unused 00:08:47.088 00:08:47.088 FDP statistics log page 00:08:47.088 ======================= 00:08:47.088 Host bytes with metadata written: 535666688 00:08:47.088 [2024-11-20 17:43:13.972596] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64121 terminated unexpected 00:08:47.088 Media bytes with metadata written: 535822336 00:08:47.088 Media bytes erased: 0 00:08:47.088 00:08:47.088 FDP events log page 00:08:47.088 =================== 00:08:47.088 Number of FDP events: 0 00:08:47.088 00:08:47.088 NVM Specific Namespace Data 00:08:47.088 =========================== 00:08:47.088 Logical Block Storage Tag Mask: 0 00:08:47.088 Protection Information Capabilities: 00:08:47.088 16b Guard Protection Information Storage Tag Support: No 00:08:47.088 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:47.088 Storage Tag Check Read Support: No 00:08:47.088 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.088 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.088 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.088 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.088 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.088 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.088 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.088 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.088 ===================================================== 00:08:47.088 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:47.088 ===================================================== 00:08:47.088 Controller Capabilities/Features 00:08:47.088 ================================ 00:08:47.088 Vendor ID: 1b36 00:08:47.088 Subsystem Vendor ID: 1af4 00:08:47.088 Serial Number: 12342 00:08:47.088 Model Number: QEMU NVMe Ctrl 00:08:47.088 Firmware Version: 8.0.0 00:08:47.088 Recommended Arb Burst: 6 00:08:47.088 IEEE OUI Identifier: 00 54 52 00:08:47.088 Multi-path I/O 
00:08:47.088 May have multiple subsystem ports: No 00:08:47.088 May have multiple controllers: No 00:08:47.088 Associated with SR-IOV VF: No 00:08:47.088 Max Data Transfer Size: 524288 00:08:47.088 Max Number of Namespaces: 256 00:08:47.088 Max Number of I/O Queues: 64 00:08:47.088 NVMe Specification Version (VS): 1.4 00:08:47.088 NVMe Specification Version (Identify): 1.4 00:08:47.088 Maximum Queue Entries: 2048 00:08:47.088 Contiguous Queues Required: Yes 00:08:47.088 Arbitration Mechanisms Supported 00:08:47.088 Weighted Round Robin: Not Supported 00:08:47.088 Vendor Specific: Not Supported 00:08:47.088 Reset Timeout: 7500 ms 00:08:47.088 Doorbell Stride: 4 bytes 00:08:47.088 NVM Subsystem Reset: Not Supported 00:08:47.088 Command Sets Supported 00:08:47.088 NVM Command Set: Supported 00:08:47.088 Boot Partition: Not Supported 00:08:47.088 Memory Page Size Minimum: 4096 bytes 00:08:47.088 Memory Page Size Maximum: 65536 bytes 00:08:47.088 Persistent Memory Region: Not Supported 00:08:47.088 Optional Asynchronous Events Supported 00:08:47.088 Namespace Attribute Notices: Supported 00:08:47.088 Firmware Activation Notices: Not Supported 00:08:47.088 ANA Change Notices: Not Supported 00:08:47.088 PLE Aggregate Log Change Notices: Not Supported 00:08:47.088 LBA Status Info Alert Notices: Not Supported 00:08:47.088 EGE Aggregate Log Change Notices: Not Supported 00:08:47.088 Normal NVM Subsystem Shutdown event: Not Supported 00:08:47.088 Zone Descriptor Change Notices: Not Supported 00:08:47.088 Discovery Log Change Notices: Not Supported 00:08:47.088 Controller Attributes 00:08:47.088 128-bit Host Identifier: Not Supported 00:08:47.088 Non-Operational Permissive Mode: Not Supported 00:08:47.088 NVM Sets: Not Supported 00:08:47.088 Read Recovery Levels: Not Supported 00:08:47.088 Endurance Groups: Not Supported 00:08:47.088 Predictable Latency Mode: Not Supported 00:08:47.088 Traffic Based Keep Alive: Not Supported 00:08:47.088 Namespace Granularity: Not Supported 00:08:47.088 SQ Associations: Not Supported 00:08:47.088 UUID List: Not Supported 00:08:47.088 Multi-Domain Subsystem: Not Supported 00:08:47.088 Fixed Capacity Management: Not Supported 00:08:47.088 Variable Capacity Management: Not Supported 00:08:47.088 Delete Endurance Group: Not Supported 00:08:47.088 Delete NVM Set: Not Supported 00:08:47.088 Extended LBA Formats Supported: Supported 00:08:47.088 Flexible Data Placement Supported: Not Supported 00:08:47.088 00:08:47.088 Controller Memory Buffer Support 00:08:47.088 ================================ 00:08:47.088 Supported: No 00:08:47.088 00:08:47.088 Persistent Memory Region Support 00:08:47.088 ================================ 00:08:47.088 Supported: No 00:08:47.088 00:08:47.088 Admin Command Set Attributes 00:08:47.088 ============================ 00:08:47.088 Security Send/Receive: Not Supported 00:08:47.088 Format NVM: Supported 00:08:47.088 Firmware Activate/Download: Not Supported 00:08:47.088 Namespace Management: Supported 00:08:47.088 Device Self-Test: Not Supported 00:08:47.088 Directives: Supported 00:08:47.088 NVMe-MI: Not Supported 00:08:47.088 Virtualization Management: Not Supported 00:08:47.088 Doorbell Buffer Config: Supported 00:08:47.088 Get LBA Status Capability: Not Supported 00:08:47.088 Command & Feature Lockdown Capability: Not Supported 00:08:47.088 Abort Command Limit: 4 00:08:47.088 Async Event Request Limit: 4 00:08:47.088 Number of Firmware Slots: N/A 00:08:47.088 Firmware Slot 1 Read-Only: N/A 00:08:47.088 Firmware Activation Without Reset: N/A 
00:08:47.088 Multiple Update Detection Support: N/A 00:08:47.088 Firmware Update Granularity: No Information Provided 00:08:47.088 Per-Namespace SMART Log: Yes 00:08:47.088 Asymmetric Namespace Access Log Page: Not Supported 00:08:47.088 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:47.088 Command Effects Log Page: Supported 00:08:47.088 Get Log Page Extended Data: Supported 00:08:47.088 Telemetry Log Pages: Not Supported 00:08:47.088 Persistent Event Log Pages: Not Supported 00:08:47.088 Supported Log Pages Log Page: May Support 00:08:47.088 Commands Supported & Effects Log Page: Not Supported 00:08:47.088 Feature Identifiers & Effects Log Page:May Support 00:08:47.088 NVMe-MI Commands & Effects Log Page: May Support 00:08:47.088 Data Area 4 for Telemetry Log: Not Supported 00:08:47.088 Error Log Page Entries Supported: 1 00:08:47.088 Keep Alive: Not Supported 00:08:47.088 00:08:47.088 NVM Command Set Attributes 00:08:47.088 ========================== 00:08:47.088 Submission Queue Entry Size 00:08:47.088 Max: 64 00:08:47.088 Min: 64 00:08:47.088 Completion Queue Entry Size 00:08:47.088 Max: 16 00:08:47.088 Min: 16 00:08:47.089 Number of Namespaces: 256 00:08:47.089 Compare Command: Supported 00:08:47.089 Write Uncorrectable Command: Not Supported 00:08:47.089 Dataset Management Command: Supported 00:08:47.089 Write Zeroes Command: Supported 00:08:47.089 Set Features Save Field: Supported 00:08:47.089 Reservations: Not Supported 00:08:47.089 Timestamp: Supported 00:08:47.089 Copy: Supported 00:08:47.089 Volatile Write Cache: Present 00:08:47.089 Atomic Write Unit (Normal): 1 00:08:47.089 Atomic Write Unit (PFail): 1 00:08:47.089 Atomic Compare & Write Unit: 1 00:08:47.089 Fused Compare & Write: Not Supported 00:08:47.089 Scatter-Gather List 00:08:47.089 SGL Command Set: Supported 00:08:47.089 SGL Keyed: Not Supported 00:08:47.089 SGL Bit Bucket Descriptor: Not Supported 00:08:47.089 SGL Metadata Pointer: Not Supported 00:08:47.089 Oversized SGL: Not Supported 00:08:47.089 SGL Metadata Address: Not Supported 00:08:47.089 SGL Offset: Not Supported 00:08:47.089 Transport SGL Data Block: Not Supported 00:08:47.089 Replay Protected Memory Block: Not Supported 00:08:47.089 00:08:47.089 Firmware Slot Information 00:08:47.089 ========================= 00:08:47.089 Active slot: 1 00:08:47.089 Slot 1 Firmware Revision: 1.0 00:08:47.089 00:08:47.089 00:08:47.089 Commands Supported and Effects 00:08:47.089 ============================== 00:08:47.089 Admin Commands 00:08:47.089 -------------- 00:08:47.089 Delete I/O Submission Queue (00h): Supported 00:08:47.089 Create I/O Submission Queue (01h): Supported 00:08:47.089 Get Log Page (02h): Supported 00:08:47.089 Delete I/O Completion Queue (04h): Supported 00:08:47.089 Create I/O Completion Queue (05h): Supported 00:08:47.089 Identify (06h): Supported 00:08:47.089 Abort (08h): Supported 00:08:47.089 Set Features (09h): Supported 00:08:47.089 Get Features (0Ah): Supported 00:08:47.089 Asynchronous Event Request (0Ch): Supported 00:08:47.089 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:47.089 Directive Send (19h): Supported 00:08:47.089 Directive Receive (1Ah): Supported 00:08:47.089 Virtualization Management (1Ch): Supported 00:08:47.089 Doorbell Buffer Config (7Ch): Supported 00:08:47.089 Format NVM (80h): Supported LBA-Change 00:08:47.089 I/O Commands 00:08:47.089 ------------ 00:08:47.089 Flush (00h): Supported LBA-Change 00:08:47.089 Write (01h): Supported LBA-Change 00:08:47.089 Read (02h): Supported 00:08:47.089 Compare (05h): 
Supported 00:08:47.089 Write Zeroes (08h): Supported LBA-Change 00:08:47.089 Dataset Management (09h): Supported LBA-Change 00:08:47.089 Unknown (0Ch): Supported 00:08:47.089 Unknown (12h): Supported 00:08:47.089 Copy (19h): Supported LBA-Change 00:08:47.089 Unknown (1Dh): Supported LBA-Change 00:08:47.089 00:08:47.089 Error Log 00:08:47.089 ========= 00:08:47.089 00:08:47.089 Arbitration 00:08:47.089 =========== 00:08:47.089 Arbitration Burst: no limit 00:08:47.089 00:08:47.089 Power Management 00:08:47.089 ================ 00:08:47.089 Number of Power States: 1 00:08:47.089 Current Power State: Power State #0 00:08:47.089 Power State #0: 00:08:47.089 Max Power: 25.00 W 00:08:47.089 Non-Operational State: Operational 00:08:47.089 Entry Latency: 16 microseconds 00:08:47.089 Exit Latency: 4 microseconds 00:08:47.089 Relative Read Throughput: 0 00:08:47.089 Relative Read Latency: 0 00:08:47.089 Relative Write Throughput: 0 00:08:47.089 Relative Write Latency: 0 00:08:47.089 Idle Power: Not Reported 00:08:47.089 Active Power: Not Reported 00:08:47.089 Non-Operational Permissive Mode: Not Supported 00:08:47.089 00:08:47.089 Health Information 00:08:47.089 ================== 00:08:47.089 Critical Warnings: 00:08:47.089 Available Spare Space: OK 00:08:47.089 Temperature: OK 00:08:47.089 Device Reliability: OK 00:08:47.089 Read Only: No 00:08:47.089 Volatile Memory Backup: OK 00:08:47.089 Current Temperature: 323 Kelvin (50 Celsius) 00:08:47.089 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:47.089 Available Spare: 0% 00:08:47.089 Available Spare Threshold: 0% 00:08:47.089 Life Percentage Used: 0% 00:08:47.089 Data Units Read: 2458 00:08:47.089 Data Units Written: 2245 00:08:47.089 Host Read Commands: 114844 00:08:47.089 Host Write Commands: 113116 00:08:47.089 Controller Busy Time: 0 minutes 00:08:47.089 Power Cycles: 0 00:08:47.089 Power On Hours: 0 hours 00:08:47.089 Unsafe Shutdowns: 0 00:08:47.089 Unrecoverable Media Errors: 0 00:08:47.089 Lifetime Error Log Entries: 0 00:08:47.089 Warning Temperature Time: 0 minutes 00:08:47.089 Critical Temperature Time: 0 minutes 00:08:47.089 00:08:47.089 Number of Queues 00:08:47.089 ================ 00:08:47.089 Number of I/O Submission Queues: 64 00:08:47.089 Number of I/O Completion Queues: 64 00:08:47.089 00:08:47.089 ZNS Specific Controller Data 00:08:47.089 ============================ 00:08:47.089 Zone Append Size Limit: 0 00:08:47.089 00:08:47.089 00:08:47.089 Active Namespaces 00:08:47.089 ================= 00:08:47.089 Namespace ID:1 00:08:47.089 Error Recovery Timeout: Unlimited 00:08:47.089 Command Set Identifier: NVM (00h) 00:08:47.089 Deallocate: Supported 00:08:47.089 Deallocated/Unwritten Error: Supported 00:08:47.089 Deallocated Read Value: All 0x00 00:08:47.089 Deallocate in Write Zeroes: Not Supported 00:08:47.089 Deallocated Guard Field: 0xFFFF 00:08:47.089 Flush: Supported 00:08:47.089 Reservation: Not Supported 00:08:47.089 Namespace Sharing Capabilities: Private 00:08:47.089 Size (in LBAs): 1048576 (4GiB) 00:08:47.089 Capacity (in LBAs): 1048576 (4GiB) 00:08:47.089 Utilization (in LBAs): 1048576 (4GiB) 00:08:47.089 Thin Provisioning: Not Supported 00:08:47.089 Per-NS Atomic Units: No 00:08:47.089 Maximum Single Source Range Length: 128 00:08:47.089 Maximum Copy Length: 128 00:08:47.089 Maximum Source Range Count: 128 00:08:47.089 NGUID/EUI64 Never Reused: No 00:08:47.089 Namespace Write Protected: No 00:08:47.089 Number of LBA Formats: 8 00:08:47.089 Current LBA Format: LBA Format #04 00:08:47.089 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:08:47.089 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:47.089 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:47.089 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:47.089 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:47.089 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:47.089 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:47.089 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:47.089 00:08:47.089 NVM Specific Namespace Data 00:08:47.089 =========================== 00:08:47.089 Logical Block Storage Tag Mask: 0 00:08:47.089 Protection Information Capabilities: 00:08:47.089 16b Guard Protection Information Storage Tag Support: No 00:08:47.089 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:47.089 Storage Tag Check Read Support: No 00:08:47.089 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Namespace ID:2 00:08:47.089 Error Recovery Timeout: Unlimited 00:08:47.089 Command Set Identifier: NVM (00h) 00:08:47.089 Deallocate: Supported 00:08:47.089 Deallocated/Unwritten Error: Supported 00:08:47.089 Deallocated Read Value: All 0x00 00:08:47.089 Deallocate in Write Zeroes: Not Supported 00:08:47.089 Deallocated Guard Field: 0xFFFF 00:08:47.089 Flush: Supported 00:08:47.089 Reservation: Not Supported 00:08:47.089 Namespace Sharing Capabilities: Private 00:08:47.089 Size (in LBAs): 1048576 (4GiB) 00:08:47.089 Capacity (in LBAs): 1048576 (4GiB) 00:08:47.089 Utilization (in LBAs): 1048576 (4GiB) 00:08:47.089 Thin Provisioning: Not Supported 00:08:47.089 Per-NS Atomic Units: No 00:08:47.089 Maximum Single Source Range Length: 128 00:08:47.089 Maximum Copy Length: 128 00:08:47.089 Maximum Source Range Count: 128 00:08:47.089 NGUID/EUI64 Never Reused: No 00:08:47.089 Namespace Write Protected: No 00:08:47.089 Number of LBA Formats: 8 00:08:47.089 Current LBA Format: LBA Format #04 00:08:47.089 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:47.089 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:47.089 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:47.089 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:47.089 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:47.089 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:47.089 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:47.089 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:47.089 00:08:47.089 NVM Specific Namespace Data 00:08:47.089 =========================== 00:08:47.089 Logical Block Storage Tag Mask: 0 00:08:47.089 Protection Information Capabilities: 00:08:47.089 16b Guard Protection Information Storage Tag Support: No 00:08:47.089 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:08:47.089 Storage Tag Check Read Support: No 00:08:47.089 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Namespace ID:3 00:08:47.089 Error Recovery Timeout: Unlimited 00:08:47.089 Command Set Identifier: NVM (00h) 00:08:47.089 Deallocate: Supported 00:08:47.089 Deallocated/Unwritten Error: Supported 00:08:47.089 Deallocated Read Value: All 0x00 00:08:47.089 Deallocate in Write Zeroes: Not Supported 00:08:47.089 Deallocated Guard Field: 0xFFFF 00:08:47.089 Flush: Supported 00:08:47.089 Reservation: Not Supported 00:08:47.089 Namespace Sharing Capabilities: Private 00:08:47.089 Size (in LBAs): 1048576 (4GiB) 00:08:47.089 Capacity (in LBAs): 1048576 (4GiB) 00:08:47.089 Utilization (in LBAs): 1048576 (4GiB) 00:08:47.089 Thin Provisioning: Not Supported 00:08:47.089 Per-NS Atomic Units: No 00:08:47.089 Maximum Single Source Range Length: 128 00:08:47.089 Maximum Copy Length: 128 00:08:47.089 Maximum Source Range Count: 128 00:08:47.089 NGUID/EUI64 Never Reused: No 00:08:47.089 Namespace Write Protected: No 00:08:47.089 Number of LBA Formats: 8 00:08:47.089 Current LBA Format: LBA Format #04 00:08:47.089 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:47.089 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:47.089 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:47.089 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:47.089 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:47.089 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:47.089 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:47.089 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:47.089 00:08:47.089 NVM Specific Namespace Data 00:08:47.089 =========================== 00:08:47.089 Logical Block Storage Tag Mask: 0 00:08:47.089 Protection Information Capabilities: 00:08:47.089 16b Guard Protection Information Storage Tag Support: No 00:08:47.089 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:47.089 Storage Tag Check Read Support: No 00:08:47.089 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.089 17:43:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:47.089 17:43:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:47.350 ===================================================== 00:08:47.350 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:47.350 ===================================================== 00:08:47.350 Controller Capabilities/Features 00:08:47.350 ================================ 00:08:47.350 Vendor ID: 1b36 00:08:47.350 Subsystem Vendor ID: 1af4 00:08:47.350 Serial Number: 12340 00:08:47.350 Model Number: QEMU NVMe Ctrl 00:08:47.350 Firmware Version: 8.0.0 00:08:47.350 Recommended Arb Burst: 6 00:08:47.350 IEEE OUI Identifier: 00 54 52 00:08:47.350 Multi-path I/O 00:08:47.350 May have multiple subsystem ports: No 00:08:47.350 May have multiple controllers: No 00:08:47.350 Associated with SR-IOV VF: No 00:08:47.350 Max Data Transfer Size: 524288 00:08:47.350 Max Number of Namespaces: 256 00:08:47.350 Max Number of I/O Queues: 64 00:08:47.350 NVMe Specification Version (VS): 1.4 00:08:47.350 NVMe Specification Version (Identify): 1.4 00:08:47.350 Maximum Queue Entries: 2048 00:08:47.350 Contiguous Queues Required: Yes 00:08:47.350 Arbitration Mechanisms Supported 00:08:47.350 Weighted Round Robin: Not Supported 00:08:47.350 Vendor Specific: Not Supported 00:08:47.350 Reset Timeout: 7500 ms 00:08:47.350 Doorbell Stride: 4 bytes 00:08:47.350 NVM Subsystem Reset: Not Supported 00:08:47.350 Command Sets Supported 00:08:47.350 NVM Command Set: Supported 00:08:47.350 Boot Partition: Not Supported 00:08:47.350 Memory Page Size Minimum: 4096 bytes 00:08:47.350 Memory Page Size Maximum: 65536 bytes 00:08:47.350 Persistent Memory Region: Not Supported 00:08:47.350 Optional Asynchronous Events Supported 00:08:47.350 Namespace Attribute Notices: Supported 00:08:47.350 Firmware Activation Notices: Not Supported 00:08:47.350 ANA Change Notices: Not Supported 00:08:47.350 PLE Aggregate Log Change Notices: Not Supported 00:08:47.350 LBA Status Info Alert Notices: Not Supported 00:08:47.350 EGE Aggregate Log Change Notices: Not Supported 00:08:47.350 Normal NVM Subsystem Shutdown event: Not Supported 00:08:47.350 Zone Descriptor Change Notices: Not Supported 00:08:47.350 Discovery Log Change Notices: Not Supported 00:08:47.350 Controller Attributes 00:08:47.350 128-bit Host Identifier: Not Supported 00:08:47.350 Non-Operational Permissive Mode: Not Supported 00:08:47.350 NVM Sets: Not Supported 00:08:47.350 Read Recovery Levels: Not Supported 00:08:47.350 Endurance Groups: Not Supported 00:08:47.350 Predictable Latency Mode: Not Supported 00:08:47.350 Traffic Based Keep ALive: Not Supported 00:08:47.350 Namespace Granularity: Not Supported 00:08:47.350 SQ Associations: Not Supported 00:08:47.350 UUID List: Not Supported 00:08:47.350 Multi-Domain Subsystem: Not Supported 00:08:47.350 Fixed Capacity Management: Not Supported 00:08:47.350 Variable Capacity Management: Not Supported 00:08:47.350 Delete Endurance Group: Not Supported 00:08:47.350 Delete NVM Set: Not Supported 00:08:47.350 Extended LBA Formats Supported: Supported 00:08:47.350 Flexible Data Placement Supported: Not Supported 00:08:47.350 00:08:47.350 Controller Memory Buffer Support 00:08:47.350 ================================ 00:08:47.350 Supported: No 00:08:47.350 00:08:47.350 Persistent Memory Region Support 00:08:47.350 
================================ 00:08:47.350 Supported: No 00:08:47.350 00:08:47.350 Admin Command Set Attributes 00:08:47.350 ============================ 00:08:47.350 Security Send/Receive: Not Supported 00:08:47.350 Format NVM: Supported 00:08:47.350 Firmware Activate/Download: Not Supported 00:08:47.350 Namespace Management: Supported 00:08:47.350 Device Self-Test: Not Supported 00:08:47.350 Directives: Supported 00:08:47.350 NVMe-MI: Not Supported 00:08:47.350 Virtualization Management: Not Supported 00:08:47.350 Doorbell Buffer Config: Supported 00:08:47.350 Get LBA Status Capability: Not Supported 00:08:47.350 Command & Feature Lockdown Capability: Not Supported 00:08:47.350 Abort Command Limit: 4 00:08:47.350 Async Event Request Limit: 4 00:08:47.350 Number of Firmware Slots: N/A 00:08:47.350 Firmware Slot 1 Read-Only: N/A 00:08:47.350 Firmware Activation Without Reset: N/A 00:08:47.350 Multiple Update Detection Support: N/A 00:08:47.350 Firmware Update Granularity: No Information Provided 00:08:47.350 Per-Namespace SMART Log: Yes 00:08:47.350 Asymmetric Namespace Access Log Page: Not Supported 00:08:47.350 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:47.350 Command Effects Log Page: Supported 00:08:47.350 Get Log Page Extended Data: Supported 00:08:47.350 Telemetry Log Pages: Not Supported 00:08:47.350 Persistent Event Log Pages: Not Supported 00:08:47.350 Supported Log Pages Log Page: May Support 00:08:47.350 Commands Supported & Effects Log Page: Not Supported 00:08:47.350 Feature Identifiers & Effects Log Page:May Support 00:08:47.350 NVMe-MI Commands & Effects Log Page: May Support 00:08:47.350 Data Area 4 for Telemetry Log: Not Supported 00:08:47.350 Error Log Page Entries Supported: 1 00:08:47.350 Keep Alive: Not Supported 00:08:47.350 00:08:47.350 NVM Command Set Attributes 00:08:47.350 ========================== 00:08:47.350 Submission Queue Entry Size 00:08:47.350 Max: 64 00:08:47.350 Min: 64 00:08:47.350 Completion Queue Entry Size 00:08:47.350 Max: 16 00:08:47.350 Min: 16 00:08:47.350 Number of Namespaces: 256 00:08:47.350 Compare Command: Supported 00:08:47.350 Write Uncorrectable Command: Not Supported 00:08:47.350 Dataset Management Command: Supported 00:08:47.350 Write Zeroes Command: Supported 00:08:47.350 Set Features Save Field: Supported 00:08:47.350 Reservations: Not Supported 00:08:47.350 Timestamp: Supported 00:08:47.350 Copy: Supported 00:08:47.350 Volatile Write Cache: Present 00:08:47.350 Atomic Write Unit (Normal): 1 00:08:47.350 Atomic Write Unit (PFail): 1 00:08:47.350 Atomic Compare & Write Unit: 1 00:08:47.350 Fused Compare & Write: Not Supported 00:08:47.350 Scatter-Gather List 00:08:47.350 SGL Command Set: Supported 00:08:47.350 SGL Keyed: Not Supported 00:08:47.350 SGL Bit Bucket Descriptor: Not Supported 00:08:47.350 SGL Metadata Pointer: Not Supported 00:08:47.350 Oversized SGL: Not Supported 00:08:47.350 SGL Metadata Address: Not Supported 00:08:47.350 SGL Offset: Not Supported 00:08:47.350 Transport SGL Data Block: Not Supported 00:08:47.350 Replay Protected Memory Block: Not Supported 00:08:47.350 00:08:47.350 Firmware Slot Information 00:08:47.350 ========================= 00:08:47.350 Active slot: 1 00:08:47.350 Slot 1 Firmware Revision: 1.0 00:08:47.350 00:08:47.350 00:08:47.350 Commands Supported and Effects 00:08:47.350 ============================== 00:08:47.350 Admin Commands 00:08:47.350 -------------- 00:08:47.350 Delete I/O Submission Queue (00h): Supported 00:08:47.350 Create I/O Submission Queue (01h): Supported 00:08:47.350 
Get Log Page (02h): Supported 00:08:47.350 Delete I/O Completion Queue (04h): Supported 00:08:47.350 Create I/O Completion Queue (05h): Supported 00:08:47.350 Identify (06h): Supported 00:08:47.350 Abort (08h): Supported 00:08:47.350 Set Features (09h): Supported 00:08:47.350 Get Features (0Ah): Supported 00:08:47.350 Asynchronous Event Request (0Ch): Supported 00:08:47.350 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:47.350 Directive Send (19h): Supported 00:08:47.350 Directive Receive (1Ah): Supported 00:08:47.350 Virtualization Management (1Ch): Supported 00:08:47.350 Doorbell Buffer Config (7Ch): Supported 00:08:47.350 Format NVM (80h): Supported LBA-Change 00:08:47.350 I/O Commands 00:08:47.350 ------------ 00:08:47.350 Flush (00h): Supported LBA-Change 00:08:47.350 Write (01h): Supported LBA-Change 00:08:47.350 Read (02h): Supported 00:08:47.350 Compare (05h): Supported 00:08:47.350 Write Zeroes (08h): Supported LBA-Change 00:08:47.350 Dataset Management (09h): Supported LBA-Change 00:08:47.350 Unknown (0Ch): Supported 00:08:47.350 Unknown (12h): Supported 00:08:47.350 Copy (19h): Supported LBA-Change 00:08:47.350 Unknown (1Dh): Supported LBA-Change 00:08:47.350 00:08:47.350 Error Log 00:08:47.350 ========= 00:08:47.350 00:08:47.350 Arbitration 00:08:47.350 =========== 00:08:47.350 Arbitration Burst: no limit 00:08:47.350 00:08:47.350 Power Management 00:08:47.350 ================ 00:08:47.350 Number of Power States: 1 00:08:47.351 Current Power State: Power State #0 00:08:47.351 Power State #0: 00:08:47.351 Max Power: 25.00 W 00:08:47.351 Non-Operational State: Operational 00:08:47.351 Entry Latency: 16 microseconds 00:08:47.351 Exit Latency: 4 microseconds 00:08:47.351 Relative Read Throughput: 0 00:08:47.351 Relative Read Latency: 0 00:08:47.351 Relative Write Throughput: 0 00:08:47.351 Relative Write Latency: 0 00:08:47.351 Idle Power: Not Reported 00:08:47.351 Active Power: Not Reported 00:08:47.351 Non-Operational Permissive Mode: Not Supported 00:08:47.351 00:08:47.351 Health Information 00:08:47.351 ================== 00:08:47.351 Critical Warnings: 00:08:47.351 Available Spare Space: OK 00:08:47.351 Temperature: OK 00:08:47.351 Device Reliability: OK 00:08:47.351 Read Only: No 00:08:47.351 Volatile Memory Backup: OK 00:08:47.351 Current Temperature: 323 Kelvin (50 Celsius) 00:08:47.351 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:47.351 Available Spare: 0% 00:08:47.351 Available Spare Threshold: 0% 00:08:47.351 Life Percentage Used: 0% 00:08:47.351 Data Units Read: 773 00:08:47.351 Data Units Written: 701 00:08:47.351 Host Read Commands: 37435 00:08:47.351 Host Write Commands: 37221 00:08:47.351 Controller Busy Time: 0 minutes 00:08:47.351 Power Cycles: 0 00:08:47.351 Power On Hours: 0 hours 00:08:47.351 Unsafe Shutdowns: 0 00:08:47.351 Unrecoverable Media Errors: 0 00:08:47.351 Lifetime Error Log Entries: 0 00:08:47.351 Warning Temperature Time: 0 minutes 00:08:47.351 Critical Temperature Time: 0 minutes 00:08:47.351 00:08:47.351 Number of Queues 00:08:47.351 ================ 00:08:47.351 Number of I/O Submission Queues: 64 00:08:47.351 Number of I/O Completion Queues: 64 00:08:47.351 00:08:47.351 ZNS Specific Controller Data 00:08:47.351 ============================ 00:08:47.351 Zone Append Size Limit: 0 00:08:47.351 00:08:47.351 00:08:47.351 Active Namespaces 00:08:47.351 ================= 00:08:47.351 Namespace ID:1 00:08:47.351 Error Recovery Timeout: Unlimited 00:08:47.351 Command Set Identifier: NVM (00h) 00:08:47.351 Deallocate: Supported 
00:08:47.351 Deallocated/Unwritten Error: Supported 00:08:47.351 Deallocated Read Value: All 0x00 00:08:47.351 Deallocate in Write Zeroes: Not Supported 00:08:47.351 Deallocated Guard Field: 0xFFFF 00:08:47.351 Flush: Supported 00:08:47.351 Reservation: Not Supported 00:08:47.351 Metadata Transferred as: Separate Metadata Buffer 00:08:47.351 Namespace Sharing Capabilities: Private 00:08:47.351 Size (in LBAs): 1548666 (5GiB) 00:08:47.351 Capacity (in LBAs): 1548666 (5GiB) 00:08:47.351 Utilization (in LBAs): 1548666 (5GiB) 00:08:47.351 Thin Provisioning: Not Supported 00:08:47.351 Per-NS Atomic Units: No 00:08:47.351 Maximum Single Source Range Length: 128 00:08:47.351 Maximum Copy Length: 128 00:08:47.351 Maximum Source Range Count: 128 00:08:47.351 NGUID/EUI64 Never Reused: No 00:08:47.351 Namespace Write Protected: No 00:08:47.351 Number of LBA Formats: 8 00:08:47.351 Current LBA Format: LBA Format #07 00:08:47.351 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:47.351 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:47.351 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:47.351 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:47.351 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:47.351 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:47.351 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:47.351 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:47.351 00:08:47.351 NVM Specific Namespace Data 00:08:47.351 =========================== 00:08:47.351 Logical Block Storage Tag Mask: 0 00:08:47.351 Protection Information Capabilities: 00:08:47.351 16b Guard Protection Information Storage Tag Support: No 00:08:47.351 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:47.351 Storage Tag Check Read Support: No 00:08:47.351 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.351 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.351 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.351 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.351 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.351 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.351 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.351 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.351 17:43:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:47.351 17:43:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:47.611 ===================================================== 00:08:47.611 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:47.611 ===================================================== 00:08:47.611 Controller Capabilities/Features 00:08:47.611 ================================ 00:08:47.611 Vendor ID: 1b36 00:08:47.611 Subsystem Vendor ID: 1af4 00:08:47.611 Serial Number: 12341 00:08:47.611 Model Number: QEMU NVMe Ctrl 00:08:47.611 Firmware Version: 8.0.0 00:08:47.611 Recommended Arb Burst: 6 00:08:47.611 IEEE OUI Identifier: 00 54 52 00:08:47.611 Multi-path I/O 00:08:47.611 May have multiple subsystem ports: No 00:08:47.611 May have multiple 
controllers: No 00:08:47.611 Associated with SR-IOV VF: No 00:08:47.611 Max Data Transfer Size: 524288 00:08:47.611 Max Number of Namespaces: 256 00:08:47.611 Max Number of I/O Queues: 64 00:08:47.611 NVMe Specification Version (VS): 1.4 00:08:47.611 NVMe Specification Version (Identify): 1.4 00:08:47.611 Maximum Queue Entries: 2048 00:08:47.611 Contiguous Queues Required: Yes 00:08:47.611 Arbitration Mechanisms Supported 00:08:47.611 Weighted Round Robin: Not Supported 00:08:47.611 Vendor Specific: Not Supported 00:08:47.611 Reset Timeout: 7500 ms 00:08:47.611 Doorbell Stride: 4 bytes 00:08:47.611 NVM Subsystem Reset: Not Supported 00:08:47.611 Command Sets Supported 00:08:47.611 NVM Command Set: Supported 00:08:47.611 Boot Partition: Not Supported 00:08:47.611 Memory Page Size Minimum: 4096 bytes 00:08:47.611 Memory Page Size Maximum: 65536 bytes 00:08:47.611 Persistent Memory Region: Not Supported 00:08:47.611 Optional Asynchronous Events Supported 00:08:47.611 Namespace Attribute Notices: Supported 00:08:47.611 Firmware Activation Notices: Not Supported 00:08:47.611 ANA Change Notices: Not Supported 00:08:47.611 PLE Aggregate Log Change Notices: Not Supported 00:08:47.611 LBA Status Info Alert Notices: Not Supported 00:08:47.611 EGE Aggregate Log Change Notices: Not Supported 00:08:47.611 Normal NVM Subsystem Shutdown event: Not Supported 00:08:47.611 Zone Descriptor Change Notices: Not Supported 00:08:47.611 Discovery Log Change Notices: Not Supported 00:08:47.611 Controller Attributes 00:08:47.611 128-bit Host Identifier: Not Supported 00:08:47.611 Non-Operational Permissive Mode: Not Supported 00:08:47.611 NVM Sets: Not Supported 00:08:47.611 Read Recovery Levels: Not Supported 00:08:47.611 Endurance Groups: Not Supported 00:08:47.611 Predictable Latency Mode: Not Supported 00:08:47.611 Traffic Based Keep ALive: Not Supported 00:08:47.611 Namespace Granularity: Not Supported 00:08:47.611 SQ Associations: Not Supported 00:08:47.612 UUID List: Not Supported 00:08:47.612 Multi-Domain Subsystem: Not Supported 00:08:47.612 Fixed Capacity Management: Not Supported 00:08:47.612 Variable Capacity Management: Not Supported 00:08:47.612 Delete Endurance Group: Not Supported 00:08:47.612 Delete NVM Set: Not Supported 00:08:47.612 Extended LBA Formats Supported: Supported 00:08:47.612 Flexible Data Placement Supported: Not Supported 00:08:47.612 00:08:47.612 Controller Memory Buffer Support 00:08:47.612 ================================ 00:08:47.612 Supported: No 00:08:47.612 00:08:47.612 Persistent Memory Region Support 00:08:47.612 ================================ 00:08:47.612 Supported: No 00:08:47.612 00:08:47.612 Admin Command Set Attributes 00:08:47.612 ============================ 00:08:47.612 Security Send/Receive: Not Supported 00:08:47.612 Format NVM: Supported 00:08:47.612 Firmware Activate/Download: Not Supported 00:08:47.612 Namespace Management: Supported 00:08:47.612 Device Self-Test: Not Supported 00:08:47.612 Directives: Supported 00:08:47.612 NVMe-MI: Not Supported 00:08:47.612 Virtualization Management: Not Supported 00:08:47.612 Doorbell Buffer Config: Supported 00:08:47.612 Get LBA Status Capability: Not Supported 00:08:47.612 Command & Feature Lockdown Capability: Not Supported 00:08:47.612 Abort Command Limit: 4 00:08:47.612 Async Event Request Limit: 4 00:08:47.612 Number of Firmware Slots: N/A 00:08:47.612 Firmware Slot 1 Read-Only: N/A 00:08:47.612 Firmware Activation Without Reset: N/A 00:08:47.612 Multiple Update Detection Support: N/A 00:08:47.612 Firmware Update 
Granularity: No Information Provided 00:08:47.612 Per-Namespace SMART Log: Yes 00:08:47.612 Asymmetric Namespace Access Log Page: Not Supported 00:08:47.612 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:47.612 Command Effects Log Page: Supported 00:08:47.612 Get Log Page Extended Data: Supported 00:08:47.612 Telemetry Log Pages: Not Supported 00:08:47.612 Persistent Event Log Pages: Not Supported 00:08:47.612 Supported Log Pages Log Page: May Support 00:08:47.612 Commands Supported & Effects Log Page: Not Supported 00:08:47.612 Feature Identifiers & Effects Log Page:May Support 00:08:47.612 NVMe-MI Commands & Effects Log Page: May Support 00:08:47.612 Data Area 4 for Telemetry Log: Not Supported 00:08:47.612 Error Log Page Entries Supported: 1 00:08:47.612 Keep Alive: Not Supported 00:08:47.612 00:08:47.612 NVM Command Set Attributes 00:08:47.612 ========================== 00:08:47.612 Submission Queue Entry Size 00:08:47.612 Max: 64 00:08:47.612 Min: 64 00:08:47.612 Completion Queue Entry Size 00:08:47.612 Max: 16 00:08:47.612 Min: 16 00:08:47.612 Number of Namespaces: 256 00:08:47.612 Compare Command: Supported 00:08:47.612 Write Uncorrectable Command: Not Supported 00:08:47.612 Dataset Management Command: Supported 00:08:47.612 Write Zeroes Command: Supported 00:08:47.612 Set Features Save Field: Supported 00:08:47.612 Reservations: Not Supported 00:08:47.612 Timestamp: Supported 00:08:47.612 Copy: Supported 00:08:47.612 Volatile Write Cache: Present 00:08:47.612 Atomic Write Unit (Normal): 1 00:08:47.612 Atomic Write Unit (PFail): 1 00:08:47.612 Atomic Compare & Write Unit: 1 00:08:47.612 Fused Compare & Write: Not Supported 00:08:47.612 Scatter-Gather List 00:08:47.612 SGL Command Set: Supported 00:08:47.612 SGL Keyed: Not Supported 00:08:47.612 SGL Bit Bucket Descriptor: Not Supported 00:08:47.612 SGL Metadata Pointer: Not Supported 00:08:47.612 Oversized SGL: Not Supported 00:08:47.612 SGL Metadata Address: Not Supported 00:08:47.612 SGL Offset: Not Supported 00:08:47.612 Transport SGL Data Block: Not Supported 00:08:47.612 Replay Protected Memory Block: Not Supported 00:08:47.612 00:08:47.612 Firmware Slot Information 00:08:47.612 ========================= 00:08:47.612 Active slot: 1 00:08:47.612 Slot 1 Firmware Revision: 1.0 00:08:47.612 00:08:47.612 00:08:47.612 Commands Supported and Effects 00:08:47.612 ============================== 00:08:47.612 Admin Commands 00:08:47.612 -------------- 00:08:47.612 Delete I/O Submission Queue (00h): Supported 00:08:47.612 Create I/O Submission Queue (01h): Supported 00:08:47.612 Get Log Page (02h): Supported 00:08:47.612 Delete I/O Completion Queue (04h): Supported 00:08:47.612 Create I/O Completion Queue (05h): Supported 00:08:47.612 Identify (06h): Supported 00:08:47.612 Abort (08h): Supported 00:08:47.612 Set Features (09h): Supported 00:08:47.612 Get Features (0Ah): Supported 00:08:47.612 Asynchronous Event Request (0Ch): Supported 00:08:47.612 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:47.612 Directive Send (19h): Supported 00:08:47.612 Directive Receive (1Ah): Supported 00:08:47.612 Virtualization Management (1Ch): Supported 00:08:47.612 Doorbell Buffer Config (7Ch): Supported 00:08:47.612 Format NVM (80h): Supported LBA-Change 00:08:47.612 I/O Commands 00:08:47.612 ------------ 00:08:47.612 Flush (00h): Supported LBA-Change 00:08:47.612 Write (01h): Supported LBA-Change 00:08:47.612 Read (02h): Supported 00:08:47.612 Compare (05h): Supported 00:08:47.612 Write Zeroes (08h): Supported LBA-Change 00:08:47.612 
Dataset Management (09h): Supported LBA-Change 00:08:47.612 Unknown (0Ch): Supported 00:08:47.612 Unknown (12h): Supported 00:08:47.612 Copy (19h): Supported LBA-Change 00:08:47.612 Unknown (1Dh): Supported LBA-Change 00:08:47.612 00:08:47.612 Error Log 00:08:47.612 ========= 00:08:47.612 00:08:47.612 Arbitration 00:08:47.612 =========== 00:08:47.612 Arbitration Burst: no limit 00:08:47.612 00:08:47.612 Power Management 00:08:47.612 ================ 00:08:47.612 Number of Power States: 1 00:08:47.612 Current Power State: Power State #0 00:08:47.612 Power State #0: 00:08:47.612 Max Power: 25.00 W 00:08:47.612 Non-Operational State: Operational 00:08:47.612 Entry Latency: 16 microseconds 00:08:47.612 Exit Latency: 4 microseconds 00:08:47.612 Relative Read Throughput: 0 00:08:47.612 Relative Read Latency: 0 00:08:47.612 Relative Write Throughput: 0 00:08:47.612 Relative Write Latency: 0 00:08:47.612 Idle Power: Not Reported 00:08:47.612 Active Power: Not Reported 00:08:47.612 Non-Operational Permissive Mode: Not Supported 00:08:47.612 00:08:47.612 Health Information 00:08:47.612 ================== 00:08:47.612 Critical Warnings: 00:08:47.612 Available Spare Space: OK 00:08:47.612 Temperature: OK 00:08:47.612 Device Reliability: OK 00:08:47.612 Read Only: No 00:08:47.612 Volatile Memory Backup: OK 00:08:47.612 Current Temperature: 323 Kelvin (50 Celsius) 00:08:47.612 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:47.612 Available Spare: 0% 00:08:47.612 Available Spare Threshold: 0% 00:08:47.612 Life Percentage Used: 0% 00:08:47.612 Data Units Read: 1157 00:08:47.612 Data Units Written: 1024 00:08:47.612 Host Read Commands: 55825 00:08:47.612 Host Write Commands: 54633 00:08:47.612 Controller Busy Time: 0 minutes 00:08:47.612 Power Cycles: 0 00:08:47.612 Power On Hours: 0 hours 00:08:47.612 Unsafe Shutdowns: 0 00:08:47.612 Unrecoverable Media Errors: 0 00:08:47.612 Lifetime Error Log Entries: 0 00:08:47.612 Warning Temperature Time: 0 minutes 00:08:47.612 Critical Temperature Time: 0 minutes 00:08:47.612 00:08:47.612 Number of Queues 00:08:47.612 ================ 00:08:47.612 Number of I/O Submission Queues: 64 00:08:47.612 Number of I/O Completion Queues: 64 00:08:47.612 00:08:47.612 ZNS Specific Controller Data 00:08:47.612 ============================ 00:08:47.612 Zone Append Size Limit: 0 00:08:47.612 00:08:47.612 00:08:47.612 Active Namespaces 00:08:47.612 ================= 00:08:47.612 Namespace ID:1 00:08:47.612 Error Recovery Timeout: Unlimited 00:08:47.612 Command Set Identifier: NVM (00h) 00:08:47.612 Deallocate: Supported 00:08:47.612 Deallocated/Unwritten Error: Supported 00:08:47.612 Deallocated Read Value: All 0x00 00:08:47.612 Deallocate in Write Zeroes: Not Supported 00:08:47.612 Deallocated Guard Field: 0xFFFF 00:08:47.612 Flush: Supported 00:08:47.612 Reservation: Not Supported 00:08:47.612 Namespace Sharing Capabilities: Private 00:08:47.612 Size (in LBAs): 1310720 (5GiB) 00:08:47.612 Capacity (in LBAs): 1310720 (5GiB) 00:08:47.612 Utilization (in LBAs): 1310720 (5GiB) 00:08:47.612 Thin Provisioning: Not Supported 00:08:47.612 Per-NS Atomic Units: No 00:08:47.612 Maximum Single Source Range Length: 128 00:08:47.612 Maximum Copy Length: 128 00:08:47.612 Maximum Source Range Count: 128 00:08:47.612 NGUID/EUI64 Never Reused: No 00:08:47.613 Namespace Write Protected: No 00:08:47.613 Number of LBA Formats: 8 00:08:47.613 Current LBA Format: LBA Format #04 00:08:47.613 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:47.613 LBA Format #01: Data Size: 512 Metadata Size: 
8 00:08:47.613 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:47.613 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:47.613 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:47.613 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:47.613 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:47.613 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:47.613 00:08:47.613 NVM Specific Namespace Data 00:08:47.613 =========================== 00:08:47.613 Logical Block Storage Tag Mask: 0 00:08:47.613 Protection Information Capabilities: 00:08:47.613 16b Guard Protection Information Storage Tag Support: No 00:08:47.613 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:47.613 Storage Tag Check Read Support: No 00:08:47.613 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.613 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.613 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.613 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.613 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.613 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.613 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.613 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.613 17:43:14 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:47.613 17:43:14 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:47.873 ===================================================== 00:08:47.873 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:47.873 ===================================================== 00:08:47.873 Controller Capabilities/Features 00:08:47.873 ================================ 00:08:47.873 Vendor ID: 1b36 00:08:47.873 Subsystem Vendor ID: 1af4 00:08:47.873 Serial Number: 12342 00:08:47.873 Model Number: QEMU NVMe Ctrl 00:08:47.873 Firmware Version: 8.0.0 00:08:47.873 Recommended Arb Burst: 6 00:08:47.873 IEEE OUI Identifier: 00 54 52 00:08:47.873 Multi-path I/O 00:08:47.873 May have multiple subsystem ports: No 00:08:47.873 May have multiple controllers: No 00:08:47.873 Associated with SR-IOV VF: No 00:08:47.873 Max Data Transfer Size: 524288 00:08:47.873 Max Number of Namespaces: 256 00:08:47.873 Max Number of I/O Queues: 64 00:08:47.873 NVMe Specification Version (VS): 1.4 00:08:47.873 NVMe Specification Version (Identify): 1.4 00:08:47.873 Maximum Queue Entries: 2048 00:08:47.873 Contiguous Queues Required: Yes 00:08:47.873 Arbitration Mechanisms Supported 00:08:47.873 Weighted Round Robin: Not Supported 00:08:47.873 Vendor Specific: Not Supported 00:08:47.873 Reset Timeout: 7500 ms 00:08:47.873 Doorbell Stride: 4 bytes 00:08:47.873 NVM Subsystem Reset: Not Supported 00:08:47.873 Command Sets Supported 00:08:47.873 NVM Command Set: Supported 00:08:47.873 Boot Partition: Not Supported 00:08:47.873 Memory Page Size Minimum: 4096 bytes 00:08:47.873 Memory Page Size Maximum: 65536 bytes 00:08:47.873 Persistent Memory Region: Not Supported 00:08:47.873 Optional Asynchronous Events Supported 00:08:47.873 Namespace Attribute Notices: Supported 00:08:47.873 
Firmware Activation Notices: Not Supported 00:08:47.873 ANA Change Notices: Not Supported 00:08:47.873 PLE Aggregate Log Change Notices: Not Supported 00:08:47.873 LBA Status Info Alert Notices: Not Supported 00:08:47.873 EGE Aggregate Log Change Notices: Not Supported 00:08:47.873 Normal NVM Subsystem Shutdown event: Not Supported 00:08:47.873 Zone Descriptor Change Notices: Not Supported 00:08:47.873 Discovery Log Change Notices: Not Supported 00:08:47.873 Controller Attributes 00:08:47.873 128-bit Host Identifier: Not Supported 00:08:47.873 Non-Operational Permissive Mode: Not Supported 00:08:47.873 NVM Sets: Not Supported 00:08:47.873 Read Recovery Levels: Not Supported 00:08:47.873 Endurance Groups: Not Supported 00:08:47.873 Predictable Latency Mode: Not Supported 00:08:47.873 Traffic Based Keep ALive: Not Supported 00:08:47.873 Namespace Granularity: Not Supported 00:08:47.873 SQ Associations: Not Supported 00:08:47.874 UUID List: Not Supported 00:08:47.874 Multi-Domain Subsystem: Not Supported 00:08:47.874 Fixed Capacity Management: Not Supported 00:08:47.874 Variable Capacity Management: Not Supported 00:08:47.874 Delete Endurance Group: Not Supported 00:08:47.874 Delete NVM Set: Not Supported 00:08:47.874 Extended LBA Formats Supported: Supported 00:08:47.874 Flexible Data Placement Supported: Not Supported 00:08:47.874 00:08:47.874 Controller Memory Buffer Support 00:08:47.874 ================================ 00:08:47.874 Supported: No 00:08:47.874 00:08:47.874 Persistent Memory Region Support 00:08:47.874 ================================ 00:08:47.874 Supported: No 00:08:47.874 00:08:47.874 Admin Command Set Attributes 00:08:47.874 ============================ 00:08:47.874 Security Send/Receive: Not Supported 00:08:47.874 Format NVM: Supported 00:08:47.874 Firmware Activate/Download: Not Supported 00:08:47.874 Namespace Management: Supported 00:08:47.874 Device Self-Test: Not Supported 00:08:47.874 Directives: Supported 00:08:47.874 NVMe-MI: Not Supported 00:08:47.874 Virtualization Management: Not Supported 00:08:47.874 Doorbell Buffer Config: Supported 00:08:47.874 Get LBA Status Capability: Not Supported 00:08:47.874 Command & Feature Lockdown Capability: Not Supported 00:08:47.874 Abort Command Limit: 4 00:08:47.874 Async Event Request Limit: 4 00:08:47.874 Number of Firmware Slots: N/A 00:08:47.874 Firmware Slot 1 Read-Only: N/A 00:08:47.874 Firmware Activation Without Reset: N/A 00:08:47.874 Multiple Update Detection Support: N/A 00:08:47.874 Firmware Update Granularity: No Information Provided 00:08:47.874 Per-Namespace SMART Log: Yes 00:08:47.874 Asymmetric Namespace Access Log Page: Not Supported 00:08:47.874 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:47.874 Command Effects Log Page: Supported 00:08:47.874 Get Log Page Extended Data: Supported 00:08:47.874 Telemetry Log Pages: Not Supported 00:08:47.874 Persistent Event Log Pages: Not Supported 00:08:47.874 Supported Log Pages Log Page: May Support 00:08:47.874 Commands Supported & Effects Log Page: Not Supported 00:08:47.874 Feature Identifiers & Effects Log Page:May Support 00:08:47.874 NVMe-MI Commands & Effects Log Page: May Support 00:08:47.874 Data Area 4 for Telemetry Log: Not Supported 00:08:47.874 Error Log Page Entries Supported: 1 00:08:47.874 Keep Alive: Not Supported 00:08:47.874 00:08:47.874 NVM Command Set Attributes 00:08:47.874 ========================== 00:08:47.874 Submission Queue Entry Size 00:08:47.874 Max: 64 00:08:47.874 Min: 64 00:08:47.874 Completion Queue Entry Size 00:08:47.874 Max: 16 
00:08:47.874 Min: 16 00:08:47.874 Number of Namespaces: 256 00:08:47.874 Compare Command: Supported 00:08:47.874 Write Uncorrectable Command: Not Supported 00:08:47.874 Dataset Management Command: Supported 00:08:47.874 Write Zeroes Command: Supported 00:08:47.874 Set Features Save Field: Supported 00:08:47.874 Reservations: Not Supported 00:08:47.874 Timestamp: Supported 00:08:47.874 Copy: Supported 00:08:47.874 Volatile Write Cache: Present 00:08:47.874 Atomic Write Unit (Normal): 1 00:08:47.874 Atomic Write Unit (PFail): 1 00:08:47.874 Atomic Compare & Write Unit: 1 00:08:47.874 Fused Compare & Write: Not Supported 00:08:47.874 Scatter-Gather List 00:08:47.874 SGL Command Set: Supported 00:08:47.874 SGL Keyed: Not Supported 00:08:47.874 SGL Bit Bucket Descriptor: Not Supported 00:08:47.874 SGL Metadata Pointer: Not Supported 00:08:47.874 Oversized SGL: Not Supported 00:08:47.874 SGL Metadata Address: Not Supported 00:08:47.874 SGL Offset: Not Supported 00:08:47.874 Transport SGL Data Block: Not Supported 00:08:47.874 Replay Protected Memory Block: Not Supported 00:08:47.874 00:08:47.874 Firmware Slot Information 00:08:47.874 ========================= 00:08:47.874 Active slot: 1 00:08:47.874 Slot 1 Firmware Revision: 1.0 00:08:47.874 00:08:47.874 00:08:47.874 Commands Supported and Effects 00:08:47.874 ============================== 00:08:47.874 Admin Commands 00:08:47.874 -------------- 00:08:47.874 Delete I/O Submission Queue (00h): Supported 00:08:47.874 Create I/O Submission Queue (01h): Supported 00:08:47.874 Get Log Page (02h): Supported 00:08:47.874 Delete I/O Completion Queue (04h): Supported 00:08:47.874 Create I/O Completion Queue (05h): Supported 00:08:47.874 Identify (06h): Supported 00:08:47.874 Abort (08h): Supported 00:08:47.874 Set Features (09h): Supported 00:08:47.874 Get Features (0Ah): Supported 00:08:47.874 Asynchronous Event Request (0Ch): Supported 00:08:47.874 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:47.874 Directive Send (19h): Supported 00:08:47.874 Directive Receive (1Ah): Supported 00:08:47.874 Virtualization Management (1Ch): Supported 00:08:47.874 Doorbell Buffer Config (7Ch): Supported 00:08:47.874 Format NVM (80h): Supported LBA-Change 00:08:47.874 I/O Commands 00:08:47.874 ------------ 00:08:47.874 Flush (00h): Supported LBA-Change 00:08:47.874 Write (01h): Supported LBA-Change 00:08:47.874 Read (02h): Supported 00:08:47.874 Compare (05h): Supported 00:08:47.874 Write Zeroes (08h): Supported LBA-Change 00:08:47.874 Dataset Management (09h): Supported LBA-Change 00:08:47.874 Unknown (0Ch): Supported 00:08:47.874 Unknown (12h): Supported 00:08:47.874 Copy (19h): Supported LBA-Change 00:08:47.874 Unknown (1Dh): Supported LBA-Change 00:08:47.874 00:08:47.874 Error Log 00:08:47.874 ========= 00:08:47.874 00:08:47.874 Arbitration 00:08:47.874 =========== 00:08:47.874 Arbitration Burst: no limit 00:08:47.874 00:08:47.874 Power Management 00:08:47.874 ================ 00:08:47.874 Number of Power States: 1 00:08:47.874 Current Power State: Power State #0 00:08:47.874 Power State #0: 00:08:47.874 Max Power: 25.00 W 00:08:47.874 Non-Operational State: Operational 00:08:47.874 Entry Latency: 16 microseconds 00:08:47.874 Exit Latency: 4 microseconds 00:08:47.874 Relative Read Throughput: 0 00:08:47.874 Relative Read Latency: 0 00:08:47.874 Relative Write Throughput: 0 00:08:47.874 Relative Write Latency: 0 00:08:47.874 Idle Power: Not Reported 00:08:47.874 Active Power: Not Reported 00:08:47.874 Non-Operational Permissive Mode: Not Supported 
00:08:47.874 00:08:47.874 Health Information 00:08:47.874 ================== 00:08:47.874 Critical Warnings: 00:08:47.874 Available Spare Space: OK 00:08:47.874 Temperature: OK 00:08:47.874 Device Reliability: OK 00:08:47.874 Read Only: No 00:08:47.874 Volatile Memory Backup: OK 00:08:47.874 Current Temperature: 323 Kelvin (50 Celsius) 00:08:47.874 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:47.874 Available Spare: 0% 00:08:47.874 Available Spare Threshold: 0% 00:08:47.874 Life Percentage Used: 0% 00:08:47.874 Data Units Read: 2458 00:08:47.874 Data Units Written: 2245 00:08:47.874 Host Read Commands: 114844 00:08:47.874 Host Write Commands: 113116 00:08:47.874 Controller Busy Time: 0 minutes 00:08:47.874 Power Cycles: 0 00:08:47.874 Power On Hours: 0 hours 00:08:47.874 Unsafe Shutdowns: 0 00:08:47.874 Unrecoverable Media Errors: 0 00:08:47.874 Lifetime Error Log Entries: 0 00:08:47.874 Warning Temperature Time: 0 minutes 00:08:47.874 Critical Temperature Time: 0 minutes 00:08:47.874 00:08:47.874 Number of Queues 00:08:47.874 ================ 00:08:47.874 Number of I/O Submission Queues: 64 00:08:47.874 Number of I/O Completion Queues: 64 00:08:47.874 00:08:47.874 ZNS Specific Controller Data 00:08:47.874 ============================ 00:08:47.874 Zone Append Size Limit: 0 00:08:47.874 00:08:47.874 00:08:47.874 Active Namespaces 00:08:47.874 ================= 00:08:47.874 Namespace ID:1 00:08:47.874 Error Recovery Timeout: Unlimited 00:08:47.874 Command Set Identifier: NVM (00h) 00:08:47.874 Deallocate: Supported 00:08:47.874 Deallocated/Unwritten Error: Supported 00:08:47.874 Deallocated Read Value: All 0x00 00:08:47.874 Deallocate in Write Zeroes: Not Supported 00:08:47.874 Deallocated Guard Field: 0xFFFF 00:08:47.874 Flush: Supported 00:08:47.874 Reservation: Not Supported 00:08:47.874 Namespace Sharing Capabilities: Private 00:08:47.874 Size (in LBAs): 1048576 (4GiB) 00:08:47.874 Capacity (in LBAs): 1048576 (4GiB) 00:08:47.874 Utilization (in LBAs): 1048576 (4GiB) 00:08:47.874 Thin Provisioning: Not Supported 00:08:47.874 Per-NS Atomic Units: No 00:08:47.874 Maximum Single Source Range Length: 128 00:08:47.874 Maximum Copy Length: 128 00:08:47.874 Maximum Source Range Count: 128 00:08:47.874 NGUID/EUI64 Never Reused: No 00:08:47.874 Namespace Write Protected: No 00:08:47.874 Number of LBA Formats: 8 00:08:47.874 Current LBA Format: LBA Format #04 00:08:47.875 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:47.875 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:47.875 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:47.875 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:47.875 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:47.875 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:47.875 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:47.875 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:47.875 00:08:47.875 NVM Specific Namespace Data 00:08:47.875 =========================== 00:08:47.875 Logical Block Storage Tag Mask: 0 00:08:47.875 Protection Information Capabilities: 00:08:47.875 16b Guard Protection Information Storage Tag Support: No 00:08:47.875 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:47.875 Storage Tag Check Read Support: No 00:08:47.875 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Namespace ID:2 00:08:47.875 Error Recovery Timeout: Unlimited 00:08:47.875 Command Set Identifier: NVM (00h) 00:08:47.875 Deallocate: Supported 00:08:47.875 Deallocated/Unwritten Error: Supported 00:08:47.875 Deallocated Read Value: All 0x00 00:08:47.875 Deallocate in Write Zeroes: Not Supported 00:08:47.875 Deallocated Guard Field: 0xFFFF 00:08:47.875 Flush: Supported 00:08:47.875 Reservation: Not Supported 00:08:47.875 Namespace Sharing Capabilities: Private 00:08:47.875 Size (in LBAs): 1048576 (4GiB) 00:08:47.875 Capacity (in LBAs): 1048576 (4GiB) 00:08:47.875 Utilization (in LBAs): 1048576 (4GiB) 00:08:47.875 Thin Provisioning: Not Supported 00:08:47.875 Per-NS Atomic Units: No 00:08:47.875 Maximum Single Source Range Length: 128 00:08:47.875 Maximum Copy Length: 128 00:08:47.875 Maximum Source Range Count: 128 00:08:47.875 NGUID/EUI64 Never Reused: No 00:08:47.875 Namespace Write Protected: No 00:08:47.875 Number of LBA Formats: 8 00:08:47.875 Current LBA Format: LBA Format #04 00:08:47.875 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:47.875 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:47.875 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:47.875 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:47.875 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:47.875 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:47.875 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:47.875 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:47.875 00:08:47.875 NVM Specific Namespace Data 00:08:47.875 =========================== 00:08:47.875 Logical Block Storage Tag Mask: 0 00:08:47.875 Protection Information Capabilities: 00:08:47.875 16b Guard Protection Information Storage Tag Support: No 00:08:47.875 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:47.875 Storage Tag Check Read Support: No 00:08:47.875 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Namespace ID:3 00:08:47.875 Error Recovery Timeout: Unlimited 00:08:47.875 Command Set Identifier: NVM (00h) 00:08:47.875 Deallocate: Supported 00:08:47.875 Deallocated/Unwritten Error: Supported 00:08:47.875 Deallocated Read 
Value: All 0x00 00:08:47.875 Deallocate in Write Zeroes: Not Supported 00:08:47.875 Deallocated Guard Field: 0xFFFF 00:08:47.875 Flush: Supported 00:08:47.875 Reservation: Not Supported 00:08:47.875 Namespace Sharing Capabilities: Private 00:08:47.875 Size (in LBAs): 1048576 (4GiB) 00:08:47.875 Capacity (in LBAs): 1048576 (4GiB) 00:08:47.875 Utilization (in LBAs): 1048576 (4GiB) 00:08:47.875 Thin Provisioning: Not Supported 00:08:47.875 Per-NS Atomic Units: No 00:08:47.875 Maximum Single Source Range Length: 128 00:08:47.875 Maximum Copy Length: 128 00:08:47.875 Maximum Source Range Count: 128 00:08:47.875 NGUID/EUI64 Never Reused: No 00:08:47.875 Namespace Write Protected: No 00:08:47.875 Number of LBA Formats: 8 00:08:47.875 Current LBA Format: LBA Format #04 00:08:47.875 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:47.875 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:47.875 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:47.875 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:47.875 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:47.875 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:47.875 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:47.875 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:47.875 00:08:47.875 NVM Specific Namespace Data 00:08:47.875 =========================== 00:08:47.875 Logical Block Storage Tag Mask: 0 00:08:47.875 Protection Information Capabilities: 00:08:47.875 16b Guard Protection Information Storage Tag Support: No 00:08:47.875 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:47.875 Storage Tag Check Read Support: No 00:08:47.875 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:47.875 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:48.134 17:43:15 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:48.134 17:43:15 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:48.394 ===================================================== 00:08:48.394 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:48.394 ===================================================== 00:08:48.394 Controller Capabilities/Features 00:08:48.394 ================================ 00:08:48.394 Vendor ID: 1b36 00:08:48.394 Subsystem Vendor ID: 1af4 00:08:48.394 Serial Number: 12343 00:08:48.394 Model Number: QEMU NVMe Ctrl 00:08:48.394 Firmware Version: 8.0.0 00:08:48.395 Recommended Arb Burst: 6 00:08:48.395 IEEE OUI Identifier: 00 54 52 00:08:48.395 Multi-path I/O 00:08:48.395 May have multiple subsystem ports: No 00:08:48.395 May have multiple controllers: Yes 00:08:48.395 Associated with SR-IOV VF: No 00:08:48.395 Max Data Transfer Size: 524288 00:08:48.395 Max Number of Namespaces: 
256 00:08:48.395 Max Number of I/O Queues: 64 00:08:48.395 NVMe Specification Version (VS): 1.4 00:08:48.395 NVMe Specification Version (Identify): 1.4 00:08:48.395 Maximum Queue Entries: 2048 00:08:48.395 Contiguous Queues Required: Yes 00:08:48.395 Arbitration Mechanisms Supported 00:08:48.395 Weighted Round Robin: Not Supported 00:08:48.395 Vendor Specific: Not Supported 00:08:48.395 Reset Timeout: 7500 ms 00:08:48.395 Doorbell Stride: 4 bytes 00:08:48.395 NVM Subsystem Reset: Not Supported 00:08:48.395 Command Sets Supported 00:08:48.395 NVM Command Set: Supported 00:08:48.395 Boot Partition: Not Supported 00:08:48.395 Memory Page Size Minimum: 4096 bytes 00:08:48.395 Memory Page Size Maximum: 65536 bytes 00:08:48.395 Persistent Memory Region: Not Supported 00:08:48.395 Optional Asynchronous Events Supported 00:08:48.395 Namespace Attribute Notices: Supported 00:08:48.395 Firmware Activation Notices: Not Supported 00:08:48.395 ANA Change Notices: Not Supported 00:08:48.395 PLE Aggregate Log Change Notices: Not Supported 00:08:48.395 LBA Status Info Alert Notices: Not Supported 00:08:48.395 EGE Aggregate Log Change Notices: Not Supported 00:08:48.395 Normal NVM Subsystem Shutdown event: Not Supported 00:08:48.395 Zone Descriptor Change Notices: Not Supported 00:08:48.395 Discovery Log Change Notices: Not Supported 00:08:48.395 Controller Attributes 00:08:48.395 128-bit Host Identifier: Not Supported 00:08:48.395 Non-Operational Permissive Mode: Not Supported 00:08:48.395 NVM Sets: Not Supported 00:08:48.395 Read Recovery Levels: Not Supported 00:08:48.395 Endurance Groups: Supported 00:08:48.395 Predictable Latency Mode: Not Supported 00:08:48.395 Traffic Based Keep Alive: Not Supported 00:08:48.395 Namespace Granularity: Not Supported 00:08:48.395 SQ Associations: Not Supported 00:08:48.395 UUID List: Not Supported 00:08:48.395 Multi-Domain Subsystem: Not Supported 00:08:48.395 Fixed Capacity Management: Not Supported 00:08:48.395 Variable Capacity Management: Not Supported 00:08:48.395 Delete Endurance Group: Not Supported 00:08:48.395 Delete NVM Set: Not Supported 00:08:48.395 Extended LBA Formats Supported: Supported 00:08:48.395 Flexible Data Placement Supported: Supported 00:08:48.395 00:08:48.395 Controller Memory Buffer Support 00:08:48.395 ================================ 00:08:48.395 Supported: No 00:08:48.395 00:08:48.395 Persistent Memory Region Support 00:08:48.395 ================================ 00:08:48.395 Supported: No 00:08:48.395 00:08:48.395 Admin Command Set Attributes 00:08:48.395 ============================ 00:08:48.395 Security Send/Receive: Not Supported 00:08:48.395 Format NVM: Supported 00:08:48.395 Firmware Activate/Download: Not Supported 00:08:48.395 Namespace Management: Supported 00:08:48.395 Device Self-Test: Not Supported 00:08:48.395 Directives: Supported 00:08:48.395 NVMe-MI: Not Supported 00:08:48.395 Virtualization Management: Not Supported 00:08:48.395 Doorbell Buffer Config: Supported 00:08:48.395 Get LBA Status Capability: Not Supported 00:08:48.395 Command & Feature Lockdown Capability: Not Supported 00:08:48.395 Abort Command Limit: 4 00:08:48.395 Async Event Request Limit: 4 00:08:48.395 Number of Firmware Slots: N/A 00:08:48.395 Firmware Slot 1 Read-Only: N/A 00:08:48.395 Firmware Activation Without Reset: N/A 00:08:48.395 Multiple Update Detection Support: N/A 00:08:48.395 Firmware Update Granularity: No Information Provided 00:08:48.395 Per-Namespace SMART Log: Yes 00:08:48.395 Asymmetric Namespace Access Log Page: Not Supported 
00:08:48.395 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:48.395 Command Effects Log Page: Supported 00:08:48.395 Get Log Page Extended Data: Supported 00:08:48.395 Telemetry Log Pages: Not Supported 00:08:48.395 Persistent Event Log Pages: Not Supported 00:08:48.395 Supported Log Pages Log Page: May Support 00:08:48.395 Commands Supported & Effects Log Page: Not Supported 00:08:48.395 Feature Identifiers & Effects Log Page: May Support 00:08:48.395 NVMe-MI Commands & Effects Log Page: May Support 00:08:48.395 Data Area 4 for Telemetry Log: Not Supported 00:08:48.395 Error Log Page Entries Supported: 1 00:08:48.395 Keep Alive: Not Supported 00:08:48.395 00:08:48.395 NVM Command Set Attributes 00:08:48.395 ========================== 00:08:48.395 Submission Queue Entry Size 00:08:48.395 Max: 64 00:08:48.395 Min: 64 00:08:48.395 Completion Queue Entry Size 00:08:48.395 Max: 16 00:08:48.395 Min: 16 00:08:48.395 Number of Namespaces: 256 00:08:48.395 Compare Command: Supported 00:08:48.395 Write Uncorrectable Command: Not Supported 00:08:48.395 Dataset Management Command: Supported 00:08:48.395 Write Zeroes Command: Supported 00:08:48.395 Set Features Save Field: Supported 00:08:48.395 Reservations: Not Supported 00:08:48.395 Timestamp: Supported 00:08:48.395 Copy: Supported 00:08:48.395 Volatile Write Cache: Present 00:08:48.395 Atomic Write Unit (Normal): 1 00:08:48.395 Atomic Write Unit (PFail): 1 00:08:48.395 Atomic Compare & Write Unit: 1 00:08:48.395 Fused Compare & Write: Not Supported 00:08:48.395 Scatter-Gather List 00:08:48.395 SGL Command Set: Supported 00:08:48.395 SGL Keyed: Not Supported 00:08:48.395 SGL Bit Bucket Descriptor: Not Supported 00:08:48.395 SGL Metadata Pointer: Not Supported 00:08:48.395 Oversized SGL: Not Supported 00:08:48.395 SGL Metadata Address: Not Supported 00:08:48.395 SGL Offset: Not Supported 00:08:48.395 Transport SGL Data Block: Not Supported 00:08:48.395 Replay Protected Memory Block: Not Supported 00:08:48.395 00:08:48.395 Firmware Slot Information 00:08:48.395 ========================= 00:08:48.395 Active slot: 1 00:08:48.395 Slot 1 Firmware Revision: 1.0 00:08:48.395 00:08:48.395 00:08:48.395 Commands Supported and Effects 00:08:48.395 ============================== 00:08:48.395 Admin Commands 00:08:48.395 -------------- 00:08:48.395 Delete I/O Submission Queue (00h): Supported 00:08:48.395 Create I/O Submission Queue (01h): Supported 00:08:48.395 Get Log Page (02h): Supported 00:08:48.395 Delete I/O Completion Queue (04h): Supported 00:08:48.395 Create I/O Completion Queue (05h): Supported 00:08:48.395 Identify (06h): Supported 00:08:48.395 Abort (08h): Supported 00:08:48.395 Set Features (09h): Supported 00:08:48.395 Get Features (0Ah): Supported 00:08:48.395 Asynchronous Event Request (0Ch): Supported 00:08:48.395 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:48.395 Directive Send (19h): Supported 00:08:48.395 Directive Receive (1Ah): Supported 00:08:48.395 Virtualization Management (1Ch): Supported 00:08:48.395 Doorbell Buffer Config (7Ch): Supported 00:08:48.395 Format NVM (80h): Supported LBA-Change 00:08:48.395 I/O Commands 00:08:48.395 ------------ 00:08:48.395 Flush (00h): Supported LBA-Change 00:08:48.395 Write (01h): Supported LBA-Change 00:08:48.395 Read (02h): Supported 00:08:48.395 Compare (05h): Supported 00:08:48.395 Write Zeroes (08h): Supported LBA-Change 00:08:48.395 Dataset Management (09h): Supported LBA-Change 00:08:48.395 Unknown (0Ch): Supported 00:08:48.395 Unknown (12h): Supported 00:08:48.395 Copy 
(19h): Supported LBA-Change 00:08:48.395 Unknown (1Dh): Supported LBA-Change 00:08:48.395 00:08:48.395 Error Log 00:08:48.395 ========= 00:08:48.395 00:08:48.395 Arbitration 00:08:48.395 =========== 00:08:48.395 Arbitration Burst: no limit 00:08:48.395 00:08:48.395 Power Management 00:08:48.395 ================ 00:08:48.395 Number of Power States: 1 00:08:48.395 Current Power State: Power State #0 00:08:48.395 Power State #0: 00:08:48.395 Max Power: 25.00 W 00:08:48.395 Non-Operational State: Operational 00:08:48.395 Entry Latency: 16 microseconds 00:08:48.395 Exit Latency: 4 microseconds 00:08:48.395 Relative Read Throughput: 0 00:08:48.395 Relative Read Latency: 0 00:08:48.395 Relative Write Throughput: 0 00:08:48.395 Relative Write Latency: 0 00:08:48.395 Idle Power: Not Reported 00:08:48.395 Active Power: Not Reported 00:08:48.395 Non-Operational Permissive Mode: Not Supported 00:08:48.395 00:08:48.395 Health Information 00:08:48.396 ================== 00:08:48.396 Critical Warnings: 00:08:48.396 Available Spare Space: OK 00:08:48.396 Temperature: OK 00:08:48.396 Device Reliability: OK 00:08:48.396 Read Only: No 00:08:48.396 Volatile Memory Backup: OK 00:08:48.396 Current Temperature: 323 Kelvin (50 Celsius) 00:08:48.396 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:48.396 Available Spare: 0% 00:08:48.396 Available Spare Threshold: 0% 00:08:48.396 Life Percentage Used: 0% 00:08:48.396 Data Units Read: 906 00:08:48.396 Data Units Written: 835 00:08:48.396 Host Read Commands: 39017 00:08:48.396 Host Write Commands: 38440 00:08:48.396 Controller Busy Time: 0 minutes 00:08:48.396 Power Cycles: 0 00:08:48.396 Power On Hours: 0 hours 00:08:48.396 Unsafe Shutdowns: 0 00:08:48.396 Unrecoverable Media Errors: 0 00:08:48.396 Lifetime Error Log Entries: 0 00:08:48.396 Warning Temperature Time: 0 minutes 00:08:48.396 Critical Temperature Time: 0 minutes 00:08:48.396 00:08:48.396 Number of Queues 00:08:48.396 ================ 00:08:48.396 Number of I/O Submission Queues: 64 00:08:48.396 Number of I/O Completion Queues: 64 00:08:48.396 00:08:48.396 ZNS Specific Controller Data 00:08:48.396 ============================ 00:08:48.396 Zone Append Size Limit: 0 00:08:48.396 00:08:48.396 00:08:48.396 Active Namespaces 00:08:48.396 ================= 00:08:48.396 Namespace ID:1 00:08:48.396 Error Recovery Timeout: Unlimited 00:08:48.396 Command Set Identifier: NVM (00h) 00:08:48.396 Deallocate: Supported 00:08:48.396 Deallocated/Unwritten Error: Supported 00:08:48.396 Deallocated Read Value: All 0x00 00:08:48.396 Deallocate in Write Zeroes: Not Supported 00:08:48.396 Deallocated Guard Field: 0xFFFF 00:08:48.396 Flush: Supported 00:08:48.396 Reservation: Not Supported 00:08:48.396 Namespace Sharing Capabilities: Multiple Controllers 00:08:48.396 Size (in LBAs): 262144 (1GiB) 00:08:48.396 Capacity (in LBAs): 262144 (1GiB) 00:08:48.396 Utilization (in LBAs): 262144 (1GiB) 00:08:48.396 Thin Provisioning: Not Supported 00:08:48.396 Per-NS Atomic Units: No 00:08:48.396 Maximum Single Source Range Length: 128 00:08:48.396 Maximum Copy Length: 128 00:08:48.396 Maximum Source Range Count: 128 00:08:48.396 NGUID/EUI64 Never Reused: No 00:08:48.396 Namespace Write Protected: No 00:08:48.396 Endurance group ID: 1 00:08:48.396 Number of LBA Formats: 8 00:08:48.396 Current LBA Format: LBA Format #04 00:08:48.396 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:48.396 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:48.396 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:48.396 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:48.396 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:48.396 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:48.396 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:48.396 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:48.396 00:08:48.396 Get Feature FDP: 00:08:48.396 ================ 00:08:48.396 Enabled: Yes 00:08:48.396 FDP configuration index: 0 00:08:48.396 00:08:48.396 FDP configurations log page 00:08:48.396 =========================== 00:08:48.396 Number of FDP configurations: 1 00:08:48.396 Version: 0 00:08:48.396 Size: 112 00:08:48.396 FDP Configuration Descriptor: 0 00:08:48.396 Descriptor Size: 96 00:08:48.396 Reclaim Group Identifier format: 2 00:08:48.396 FDP Volatile Write Cache: Not Present 00:08:48.396 FDP Configuration: Valid 00:08:48.396 Vendor Specific Size: 0 00:08:48.396 Number of Reclaim Groups: 2 00:08:48.396 Number of Reclaim Unit Handles: 8 00:08:48.396 Max Placement Identifiers: 128 00:08:48.396 Number of Namespaces Supported: 256 00:08:48.396 Reclaim Unit Nominal Size: 6000000 bytes 00:08:48.396 Estimated Reclaim Unit Time Limit: Not Reported 00:08:48.396 RUH Desc #000: RUH Type: Initially Isolated 00:08:48.396 RUH Desc #001: RUH Type: Initially Isolated 00:08:48.396 RUH Desc #002: RUH Type: Initially Isolated 00:08:48.396 RUH Desc #003: RUH Type: Initially Isolated 00:08:48.396 RUH Desc #004: RUH Type: Initially Isolated 00:08:48.396 RUH Desc #005: RUH Type: Initially Isolated 00:08:48.396 RUH Desc #006: RUH Type: Initially Isolated 00:08:48.396 RUH Desc #007: RUH Type: Initially Isolated 00:08:48.396 00:08:48.396 FDP reclaim unit handle usage log page 00:08:48.396 ====================================== 00:08:48.396 Number of Reclaim Unit Handles: 8 00:08:48.396 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:48.396 RUH Usage Desc #001: RUH Attributes: Unused 00:08:48.396 RUH Usage Desc #002: RUH Attributes: Unused 00:08:48.396 RUH Usage Desc #003: RUH Attributes: Unused 00:08:48.396 RUH Usage Desc #004: RUH Attributes: Unused 00:08:48.396 RUH Usage Desc #005: RUH Attributes: Unused 00:08:48.396 RUH Usage Desc #006: RUH Attributes: Unused 00:08:48.396 RUH Usage Desc #007: RUH Attributes: Unused 00:08:48.396 00:08:48.396 FDP statistics log page 00:08:48.396 ======================= 00:08:48.396 Host bytes with metadata written: 535666688 00:08:48.396 Media bytes with metadata written: 535822336 00:08:48.396 Media bytes erased: 0 00:08:48.396 00:08:48.396 FDP events log page 00:08:48.396 =================== 00:08:48.396 Number of FDP events: 0 00:08:48.396 00:08:48.396 NVM Specific Namespace Data 00:08:48.396 =========================== 00:08:48.396 Logical Block Storage Tag Mask: 0 00:08:48.396 Protection Information Capabilities: 00:08:48.396 16b Guard Protection Information Storage Tag Support: No 00:08:48.396 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:48.396 Storage Tag Check Read Support: No 00:08:48.396 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:48.396 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:48.396 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:48.396 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:48.396 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:48.396 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:48.396 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:48.396 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:48.396 00:08:48.396 real 0m1.831s 00:08:48.396 user 0m0.689s 00:08:48.396 sys 0m0.933s 00:08:48.396 17:43:15 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:48.396 17:43:15 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:48.396 ************************************ 00:08:48.396 END TEST nvme_identify 00:08:48.396 ************************************ 00:08:48.396 17:43:15 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:48.396 17:43:15 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:48.396 17:43:15 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:48.396 17:43:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:48.396 ************************************ 00:08:48.396 START TEST nvme_perf 00:08:48.396 ************************************ 00:08:48.396 17:43:15 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:08:48.396 17:43:15 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:49.775 Initializing NVMe Controllers 00:08:49.775 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:49.775 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:49.775 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:49.775 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:49.775 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:49.775 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:49.775 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:49.775 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:49.775 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:49.775 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:49.775 Initialization complete. Launching workers. 
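
The spdk_nvme_perf invocation above issues 12288-byte (12 KiB) reads at queue depth 128 for 1 second, so the IOPS and MiB/s columns in the summary table below are two views of the same measurement: MiB/s = IOPS x 12288 / 2^20. A minimal Python sanity check (not part of the test harness; the figures are copied from the summary rows below):

IO_SIZE_BYTES = 12288  # from the -o 12288 option of the run above

def mibps(iops: float, io_size: int = IO_SIZE_BYTES) -> float:
    # Throughput in MiB/s implied by an IOPS figure at a fixed I/O size.
    return iops * io_size / 2**20

assert round(mibps(13167.55), 2) == 154.31  # any single per-namespace row
assert round(mibps(79005.32), 2) == 925.84  # the Total row (6 namespaces)
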
00:08:49.775 ======================================================== 00:08:49.775 Latency(us) 00:08:49.775 Device Information : IOPS MiB/s Average min max 00:08:49.775 PCIE (0000:00:10.0) NSID 1 from core 0: 13167.55 154.31 9745.64 7959.26 56027.98 00:08:49.775 PCIE (0000:00:11.0) NSID 1 from core 0: 13167.55 154.31 9731.69 8091.70 54380.74 00:08:49.775 PCIE (0000:00:13.0) NSID 1 from core 0: 13167.55 154.31 9716.26 8058.37 53083.58 00:08:49.775 PCIE (0000:00:12.0) NSID 1 from core 0: 13167.55 154.31 9700.54 8060.74 51284.73 00:08:49.775 PCIE (0000:00:12.0) NSID 2 from core 0: 13167.55 154.31 9684.90 8049.79 49488.31 00:08:49.775 PCIE (0000:00:12.0) NSID 3 from core 0: 13167.55 154.31 9668.20 8048.80 47646.03 00:08:49.775 ======================================================== 00:08:49.775 Total : 79005.32 925.84 9707.87 7959.26 56027.98 00:08:49.775 00:08:49.775 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:49.775 ================================================================================= 00:08:49.775 1.00000% : 8211.740us 00:08:49.775 10.00000% : 8474.937us 00:08:49.775 25.00000% : 8685.494us 00:08:49.775 50.00000% : 9053.969us 00:08:49.775 75.00000% : 9475.084us 00:08:49.775 90.00000% : 11001.626us 00:08:49.775 95.00000% : 12212.331us 00:08:49.775 98.00000% : 14107.348us 00:08:49.775 99.00000% : 15265.414us 00:08:49.775 99.50000% : 46533.192us 00:08:49.775 99.90000% : 55587.161us 00:08:49.775 99.99000% : 56008.276us 00:08:49.775 99.99900% : 56429.391us 00:08:49.775 99.99990% : 56429.391us 00:08:49.775 99.99999% : 56429.391us 00:08:49.775 00:08:49.775 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:49.775 ================================================================================= 00:08:49.775 1.00000% : 8264.379us 00:08:49.775 10.00000% : 8527.576us 00:08:49.775 25.00000% : 8738.133us 00:08:49.775 50.00000% : 9001.330us 00:08:49.775 75.00000% : 9422.445us 00:08:49.775 90.00000% : 11001.626us 00:08:49.775 95.00000% : 12212.331us 00:08:49.775 98.00000% : 14002.069us 00:08:49.775 99.00000% : 15581.250us 00:08:49.775 99.50000% : 45480.405us 00:08:49.775 99.90000% : 54323.817us 00:08:49.775 99.99000% : 54744.932us 00:08:49.775 99.99900% : 54744.932us 00:08:49.775 99.99990% : 54744.932us 00:08:49.775 99.99999% : 54744.932us 00:08:49.775 00:08:49.775 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:49.775 ================================================================================= 00:08:49.775 1.00000% : 8264.379us 00:08:49.775 10.00000% : 8527.576us 00:08:49.775 25.00000% : 8738.133us 00:08:49.775 50.00000% : 9001.330us 00:08:49.775 75.00000% : 9422.445us 00:08:49.775 90.00000% : 11054.265us 00:08:49.775 95.00000% : 12159.692us 00:08:49.775 98.00000% : 13686.233us 00:08:49.775 99.00000% : 15791.807us 00:08:49.775 99.50000% : 45059.290us 00:08:49.775 99.90000% : 52639.357us 00:08:49.775 99.99000% : 53060.472us 00:08:49.775 99.99900% : 53271.030us 00:08:49.775 99.99990% : 53271.030us 00:08:49.775 99.99999% : 53271.030us 00:08:49.775 00:08:49.775 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:49.775 ================================================================================= 00:08:49.775 1.00000% : 8264.379us 00:08:49.775 10.00000% : 8527.576us 00:08:49.775 25.00000% : 8738.133us 00:08:49.775 50.00000% : 9001.330us 00:08:49.775 75.00000% : 9422.445us 00:08:49.775 90.00000% : 11054.265us 00:08:49.775 95.00000% : 12107.052us 00:08:49.775 98.00000% : 13580.954us 00:08:49.775 
99.00000% : 16212.922us 00:08:49.775 99.50000% : 43585.388us 00:08:49.775 99.90000% : 50954.898us 00:08:49.775 99.99000% : 51376.013us 00:08:49.775 99.99900% : 51376.013us 00:08:49.775 99.99990% : 51376.013us 00:08:49.775 99.99999% : 51376.013us 00:08:49.775 00:08:49.775 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:49.775 ================================================================================= 00:08:49.775 1.00000% : 8264.379us 00:08:49.775 10.00000% : 8527.576us 00:08:49.775 25.00000% : 8738.133us 00:08:49.775 50.00000% : 9001.330us 00:08:49.775 75.00000% : 9422.445us 00:08:49.775 90.00000% : 11001.626us 00:08:49.775 95.00000% : 12159.692us 00:08:49.775 98.00000% : 13475.676us 00:08:49.775 99.00000% : 16107.643us 00:08:49.775 99.50000% : 41900.929us 00:08:49.775 99.90000% : 49270.439us 00:08:49.775 99.99000% : 49480.996us 00:08:49.775 99.99900% : 49691.553us 00:08:49.775 99.99990% : 49691.553us 00:08:49.775 99.99999% : 49691.553us 00:08:49.775 00:08:49.775 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:49.775 ================================================================================= 00:08:49.775 1.00000% : 8264.379us 00:08:49.775 10.00000% : 8527.576us 00:08:49.775 25.00000% : 8738.133us 00:08:49.775 50.00000% : 9001.330us 00:08:49.775 75.00000% : 9422.445us 00:08:49.775 90.00000% : 11054.265us 00:08:49.775 95.00000% : 12159.692us 00:08:49.775 98.00000% : 13686.233us 00:08:49.775 99.00000% : 15686.529us 00:08:49.775 99.50000% : 40216.469us 00:08:49.775 99.90000% : 47375.422us 00:08:49.776 99.99000% : 47796.537us 00:08:49.776 99.99900% : 47796.537us 00:08:49.776 99.99990% : 47796.537us 00:08:49.776 99.99999% : 47796.537us 00:08:49.776 00:08:49.776 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:49.776 ============================================================================== 00:08:49.776 Range in us Cumulative IO count 00:08:49.776 7948.543 - 8001.182: 0.0379% ( 5) 00:08:49.776 8001.182 - 8053.822: 0.1441% ( 14) 00:08:49.776 8053.822 - 8106.461: 0.3489% ( 27) 00:08:49.776 8106.461 - 8159.100: 0.7661% ( 55) 00:08:49.776 8159.100 - 8211.740: 1.3956% ( 83) 00:08:49.776 8211.740 - 8264.379: 2.3817% ( 130) 00:08:49.776 8264.379 - 8317.018: 3.8152% ( 189) 00:08:49.776 8317.018 - 8369.658: 5.6811% ( 246) 00:08:49.776 8369.658 - 8422.297: 8.1766% ( 329) 00:08:49.776 8422.297 - 8474.937: 11.2409% ( 404) 00:08:49.776 8474.937 - 8527.576: 14.6162% ( 445) 00:08:49.776 8527.576 - 8580.215: 18.2494% ( 479) 00:08:49.776 8580.215 - 8632.855: 21.9964% ( 494) 00:08:49.776 8632.855 - 8685.494: 25.7661% ( 497) 00:08:49.776 8685.494 - 8738.133: 29.5889% ( 504) 00:08:49.776 8738.133 - 8790.773: 33.5482% ( 522) 00:08:49.776 8790.773 - 8843.412: 37.5607% ( 529) 00:08:49.776 8843.412 - 8896.051: 41.5731% ( 529) 00:08:49.776 8896.051 - 8948.691: 45.6159% ( 533) 00:08:49.776 8948.691 - 9001.330: 49.4918% ( 511) 00:08:49.776 9001.330 - 9053.969: 53.5801% ( 539) 00:08:49.776 9053.969 - 9106.609: 57.5243% ( 520) 00:08:49.776 9106.609 - 9159.248: 61.2106% ( 486) 00:08:49.776 9159.248 - 9211.888: 64.6921% ( 459) 00:08:49.776 9211.888 - 9264.527: 67.7336% ( 401) 00:08:49.776 9264.527 - 9317.166: 70.4490% ( 358) 00:08:49.776 9317.166 - 9369.806: 72.6032% ( 284) 00:08:49.776 9369.806 - 9422.445: 74.4235% ( 240) 00:08:49.776 9422.445 - 9475.084: 75.8268% ( 185) 00:08:49.776 9475.084 - 9527.724: 76.9873% ( 153) 00:08:49.776 9527.724 - 9580.363: 77.7988% ( 107) 00:08:49.776 9580.363 - 9633.002: 78.3829% ( 77) 00:08:49.776 9633.002 - 
9685.642: 78.8304% ( 59) 00:08:49.776 9685.642 - 9738.281: 79.3310% ( 66) 00:08:49.776 9738.281 - 9790.920: 79.7406% ( 54) 00:08:49.776 9790.920 - 9843.560: 80.1881% ( 59) 00:08:49.776 9843.560 - 9896.199: 80.6432% ( 60) 00:08:49.776 9896.199 - 9948.839: 81.0604% ( 55) 00:08:49.776 9948.839 - 10001.478: 81.5686% ( 67) 00:08:49.776 10001.478 - 10054.117: 82.0995% ( 70) 00:08:49.776 10054.117 - 10106.757: 82.6760% ( 76) 00:08:49.776 10106.757 - 10159.396: 83.2373% ( 74) 00:08:49.776 10159.396 - 10212.035: 83.7454% ( 67) 00:08:49.776 10212.035 - 10264.675: 84.2081% ( 61) 00:08:49.776 10264.675 - 10317.314: 84.7012% ( 65) 00:08:49.776 10317.314 - 10369.953: 85.1562% ( 60) 00:08:49.776 10369.953 - 10422.593: 85.6948% ( 71) 00:08:49.776 10422.593 - 10475.232: 86.1044% ( 54) 00:08:49.776 10475.232 - 10527.871: 86.5671% ( 61) 00:08:49.776 10527.871 - 10580.511: 86.9615% ( 52) 00:08:49.776 10580.511 - 10633.150: 87.3559% ( 52) 00:08:49.776 10633.150 - 10685.790: 87.7882% ( 57) 00:08:49.776 10685.790 - 10738.429: 88.2054% ( 55) 00:08:49.776 10738.429 - 10791.068: 88.6529% ( 59) 00:08:49.776 10791.068 - 10843.708: 89.0701% ( 55) 00:08:49.776 10843.708 - 10896.347: 89.4342% ( 48) 00:08:49.776 10896.347 - 10948.986: 89.7300% ( 39) 00:08:49.776 10948.986 - 11001.626: 90.0485% ( 42) 00:08:49.776 11001.626 - 11054.265: 90.3216% ( 36) 00:08:49.776 11054.265 - 11106.904: 90.5795% ( 34) 00:08:49.776 11106.904 - 11159.544: 90.7995% ( 29) 00:08:49.776 11159.544 - 11212.183: 91.0118% ( 28) 00:08:49.776 11212.183 - 11264.822: 91.1939% ( 24) 00:08:49.776 11264.822 - 11317.462: 91.3911% ( 26) 00:08:49.776 11317.462 - 11370.101: 91.6186% ( 30) 00:08:49.776 11370.101 - 11422.741: 91.8689% ( 33) 00:08:49.776 11422.741 - 11475.380: 92.0661% ( 26) 00:08:49.776 11475.380 - 11528.019: 92.3392% ( 36) 00:08:49.776 11528.019 - 11580.659: 92.5440% ( 27) 00:08:49.776 11580.659 - 11633.298: 92.7715% ( 30) 00:08:49.776 11633.298 - 11685.937: 92.9384% ( 22) 00:08:49.776 11685.937 - 11738.577: 93.1204% ( 24) 00:08:49.776 11738.577 - 11791.216: 93.3404% ( 29) 00:08:49.776 11791.216 - 11843.855: 93.5604% ( 29) 00:08:49.776 11843.855 - 11896.495: 93.7424% ( 24) 00:08:49.776 11896.495 - 11949.134: 93.9927% ( 33) 00:08:49.776 11949.134 - 12001.773: 94.2127% ( 29) 00:08:49.776 12001.773 - 12054.413: 94.4326% ( 29) 00:08:49.776 12054.413 - 12107.052: 94.6299% ( 26) 00:08:49.776 12107.052 - 12159.692: 94.8195% ( 25) 00:08:49.776 12159.692 - 12212.331: 95.0394% ( 29) 00:08:49.776 12212.331 - 12264.970: 95.2139% ( 23) 00:08:49.776 12264.970 - 12317.610: 95.3732% ( 21) 00:08:49.776 12317.610 - 12370.249: 95.5552% ( 24) 00:08:49.776 12370.249 - 12422.888: 95.7373% ( 24) 00:08:49.776 12422.888 - 12475.528: 95.8510% ( 15) 00:08:49.776 12475.528 - 12528.167: 96.0255% ( 23) 00:08:49.776 12528.167 - 12580.806: 96.1544% ( 17) 00:08:49.776 12580.806 - 12633.446: 96.2985% ( 19) 00:08:49.776 12633.446 - 12686.085: 96.4199% ( 16) 00:08:49.776 12686.085 - 12738.724: 96.5261% ( 14) 00:08:49.776 12738.724 - 12791.364: 96.6399% ( 15) 00:08:49.776 12791.364 - 12844.003: 96.7309% ( 12) 00:08:49.776 12844.003 - 12896.643: 96.8143% ( 11) 00:08:49.776 12896.643 - 12949.282: 96.8750% ( 8) 00:08:49.776 12949.282 - 13001.921: 96.9053% ( 4) 00:08:49.776 13001.921 - 13054.561: 96.9281% ( 3) 00:08:49.776 13054.561 - 13107.200: 96.9736% ( 6) 00:08:49.776 13107.200 - 13159.839: 97.0191% ( 6) 00:08:49.776 13159.839 - 13212.479: 97.0419% ( 3) 00:08:49.776 13212.479 - 13265.118: 97.0874% ( 6) 00:08:49.776 13265.118 - 13317.757: 97.1632% ( 10) 00:08:49.776 13317.757 - 
13370.397: 97.2163% ( 7) 00:08:49.776 13370.397 - 13423.036: 97.2770% ( 8) 00:08:49.776 13423.036 - 13475.676: 97.3529% ( 10) 00:08:49.776 13475.676 - 13580.954: 97.4894% ( 18) 00:08:49.776 13580.954 - 13686.233: 97.6183% ( 17) 00:08:49.776 13686.233 - 13791.512: 97.7093% ( 12) 00:08:49.776 13791.512 - 13896.790: 97.8307% ( 16) 00:08:49.776 13896.790 - 14002.069: 97.9293% ( 13) 00:08:49.776 14002.069 - 14107.348: 98.0355% ( 14) 00:08:49.776 14107.348 - 14212.627: 98.1189% ( 11) 00:08:49.776 14212.627 - 14317.905: 98.2327% ( 15) 00:08:49.776 14317.905 - 14423.184: 98.3541% ( 16) 00:08:49.776 14423.184 - 14528.463: 98.4603% ( 14) 00:08:49.776 14528.463 - 14633.741: 98.5589% ( 13) 00:08:49.776 14633.741 - 14739.020: 98.6650% ( 14) 00:08:49.776 14739.020 - 14844.299: 98.7864% ( 16) 00:08:49.776 14844.299 - 14949.578: 98.8774% ( 12) 00:08:49.776 14949.578 - 15054.856: 98.9533% ( 10) 00:08:49.776 15054.856 - 15160.135: 98.9836% ( 4) 00:08:49.776 15160.135 - 15265.414: 99.0064% ( 3) 00:08:49.776 15265.414 - 15370.692: 99.0291% ( 3) 00:08:49.776 44217.060 - 44427.618: 99.0595% ( 4) 00:08:49.776 44427.618 - 44638.175: 99.1050% ( 6) 00:08:49.776 44638.175 - 44848.733: 99.1505% ( 6) 00:08:49.776 44848.733 - 45059.290: 99.1960% ( 6) 00:08:49.776 45059.290 - 45269.847: 99.2415% ( 6) 00:08:49.776 45269.847 - 45480.405: 99.2870% ( 6) 00:08:49.776 45480.405 - 45690.962: 99.3325% ( 6) 00:08:49.776 45690.962 - 45901.520: 99.3780% ( 6) 00:08:49.776 45901.520 - 46112.077: 99.4235% ( 6) 00:08:49.776 46112.077 - 46322.635: 99.4766% ( 7) 00:08:49.776 46322.635 - 46533.192: 99.5146% ( 5) 00:08:49.776 53481.587 - 53692.145: 99.5449% ( 4) 00:08:49.776 53692.145 - 53902.702: 99.5904% ( 6) 00:08:49.776 53902.702 - 54323.817: 99.6814% ( 12) 00:08:49.776 54323.817 - 54744.932: 99.7421% ( 8) 00:08:49.776 54744.932 - 55166.047: 99.8255% ( 11) 00:08:49.776 55166.047 - 55587.161: 99.9166% ( 12) 00:08:49.776 55587.161 - 56008.276: 99.9924% ( 10) 00:08:49.776 56008.276 - 56429.391: 100.0000% ( 1) 00:08:49.776 00:08:49.776 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:49.776 ============================================================================== 00:08:49.776 Range in us Cumulative IO count 00:08:49.776 8053.822 - 8106.461: 0.0531% ( 7) 00:08:49.776 8106.461 - 8159.100: 0.2427% ( 25) 00:08:49.776 8159.100 - 8211.740: 0.5689% ( 43) 00:08:49.776 8211.740 - 8264.379: 1.1833% ( 81) 00:08:49.776 8264.379 - 8317.018: 2.0176% ( 110) 00:08:49.776 8317.018 - 8369.658: 3.3905% ( 181) 00:08:49.776 8369.658 - 8422.297: 5.2943% ( 251) 00:08:49.776 8422.297 - 8474.937: 7.8808% ( 341) 00:08:49.776 8474.937 - 8527.576: 11.0133% ( 413) 00:08:49.776 8527.576 - 8580.215: 14.8513% ( 506) 00:08:49.776 8580.215 - 8632.855: 18.8789% ( 531) 00:08:49.776 8632.855 - 8685.494: 23.3692% ( 592) 00:08:49.776 8685.494 - 8738.133: 27.9961% ( 610) 00:08:49.776 8738.133 - 8790.773: 32.6077% ( 608) 00:08:49.776 8790.773 - 8843.412: 37.3331% ( 623) 00:08:49.776 8843.412 - 8896.051: 42.0661% ( 624) 00:08:49.776 8896.051 - 8948.691: 46.6930% ( 610) 00:08:49.776 8948.691 - 9001.330: 51.2743% ( 604) 00:08:49.776 9001.330 - 9053.969: 55.7646% ( 592) 00:08:49.776 9053.969 - 9106.609: 59.9894% ( 557) 00:08:49.776 9106.609 - 9159.248: 63.9108% ( 517) 00:08:49.776 9159.248 - 9211.888: 67.3164% ( 449) 00:08:49.776 9211.888 - 9264.527: 70.1836% ( 378) 00:08:49.776 9264.527 - 9317.166: 72.4666% ( 301) 00:08:49.776 9317.166 - 9369.806: 74.2794% ( 239) 00:08:49.776 9369.806 - 9422.445: 75.6220% ( 177) 00:08:49.776 9422.445 - 9475.084: 76.6535% ( 
136) 00:08:49.776 9475.084 - 9527.724: 77.3741% ( 95) 00:08:49.776 9527.724 - 9580.363: 77.9278% ( 73) 00:08:49.776 9580.363 - 9633.002: 78.4284% ( 66) 00:08:49.776 9633.002 - 9685.642: 78.8228% ( 52) 00:08:49.776 9685.642 - 9738.281: 79.2172% ( 52) 00:08:49.776 9738.281 - 9790.920: 79.6420% ( 56) 00:08:49.777 9790.920 - 9843.560: 80.1123% ( 62) 00:08:49.777 9843.560 - 9896.199: 80.5901% ( 63) 00:08:49.777 9896.199 - 9948.839: 81.1211% ( 70) 00:08:49.777 9948.839 - 10001.478: 81.6596% ( 71) 00:08:49.777 10001.478 - 10054.117: 82.2285% ( 75) 00:08:49.777 10054.117 - 10106.757: 82.7291% ( 66) 00:08:49.777 10106.757 - 10159.396: 83.3131% ( 77) 00:08:49.777 10159.396 - 10212.035: 83.8289% ( 68) 00:08:49.777 10212.035 - 10264.675: 84.3219% ( 65) 00:08:49.777 10264.675 - 10317.314: 84.8529% ( 70) 00:08:49.777 10317.314 - 10369.953: 85.3914% ( 71) 00:08:49.777 10369.953 - 10422.593: 85.8844% ( 65) 00:08:49.777 10422.593 - 10475.232: 86.3547% ( 62) 00:08:49.777 10475.232 - 10527.871: 86.8477% ( 65) 00:08:49.777 10527.871 - 10580.511: 87.3180% ( 62) 00:08:49.777 10580.511 - 10633.150: 87.7731% ( 60) 00:08:49.777 10633.150 - 10685.790: 88.1978% ( 56) 00:08:49.777 10685.790 - 10738.429: 88.6226% ( 56) 00:08:49.777 10738.429 - 10791.068: 88.9336% ( 41) 00:08:49.777 10791.068 - 10843.708: 89.2218% ( 38) 00:08:49.777 10843.708 - 10896.347: 89.5252% ( 40) 00:08:49.777 10896.347 - 10948.986: 89.8058% ( 37) 00:08:49.777 10948.986 - 11001.626: 90.0713% ( 35) 00:08:49.777 11001.626 - 11054.265: 90.3216% ( 33) 00:08:49.777 11054.265 - 11106.904: 90.5871% ( 35) 00:08:49.777 11106.904 - 11159.544: 90.7995% ( 28) 00:08:49.777 11159.544 - 11212.183: 90.9739% ( 23) 00:08:49.777 11212.183 - 11264.822: 91.1711% ( 26) 00:08:49.777 11264.822 - 11317.462: 91.3532% ( 24) 00:08:49.777 11317.462 - 11370.101: 91.5731% ( 29) 00:08:49.777 11370.101 - 11422.741: 91.7476% ( 23) 00:08:49.777 11422.741 - 11475.380: 91.9372% ( 25) 00:08:49.777 11475.380 - 11528.019: 92.1420% ( 27) 00:08:49.777 11528.019 - 11580.659: 92.3544% ( 28) 00:08:49.777 11580.659 - 11633.298: 92.5364% ( 24) 00:08:49.777 11633.298 - 11685.937: 92.7488% ( 28) 00:08:49.777 11685.937 - 11738.577: 92.9536% ( 27) 00:08:49.777 11738.577 - 11791.216: 93.1811% ( 30) 00:08:49.777 11791.216 - 11843.855: 93.4238% ( 32) 00:08:49.777 11843.855 - 11896.495: 93.6438% ( 29) 00:08:49.777 11896.495 - 11949.134: 93.8562% ( 28) 00:08:49.777 11949.134 - 12001.773: 94.1444% ( 38) 00:08:49.777 12001.773 - 12054.413: 94.4099% ( 35) 00:08:49.777 12054.413 - 12107.052: 94.6526% ( 32) 00:08:49.777 12107.052 - 12159.692: 94.9029% ( 33) 00:08:49.777 12159.692 - 12212.331: 95.1456% ( 32) 00:08:49.777 12212.331 - 12264.970: 95.3580% ( 28) 00:08:49.777 12264.970 - 12317.610: 95.5476% ( 25) 00:08:49.777 12317.610 - 12370.249: 95.7373% ( 25) 00:08:49.777 12370.249 - 12422.888: 95.9269% ( 25) 00:08:49.777 12422.888 - 12475.528: 96.0938% ( 22) 00:08:49.777 12475.528 - 12528.167: 96.2454% ( 20) 00:08:49.777 12528.167 - 12580.806: 96.3744% ( 17) 00:08:49.777 12580.806 - 12633.446: 96.4958% ( 16) 00:08:49.777 12633.446 - 12686.085: 96.6323% ( 18) 00:08:49.777 12686.085 - 12738.724: 96.7309% ( 13) 00:08:49.777 12738.724 - 12791.364: 96.7992% ( 9) 00:08:49.777 12791.364 - 12844.003: 96.8750% ( 10) 00:08:49.777 12844.003 - 12896.643: 96.9508% ( 10) 00:08:49.777 12896.643 - 12949.282: 97.0267% ( 10) 00:08:49.777 12949.282 - 13001.921: 97.0798% ( 7) 00:08:49.777 13001.921 - 13054.561: 97.1329% ( 7) 00:08:49.777 13054.561 - 13107.200: 97.1860% ( 7) 00:08:49.777 13107.200 - 13159.839: 97.2542% ( 9) 
00:08:49.777 13159.839 - 13212.479: 97.2922% ( 5) 00:08:49.777 13212.479 - 13265.118: 97.3453% ( 7) 00:08:49.777 13265.118 - 13317.757: 97.4211% ( 10) 00:08:49.777 13317.757 - 13370.397: 97.4742% ( 7) 00:08:49.777 13370.397 - 13423.036: 97.5349% ( 8) 00:08:49.777 13423.036 - 13475.676: 97.5956% ( 8) 00:08:49.777 13475.676 - 13580.954: 97.7018% ( 14) 00:08:49.777 13580.954 - 13686.233: 97.7928% ( 12) 00:08:49.777 13686.233 - 13791.512: 97.8686% ( 10) 00:08:49.777 13791.512 - 13896.790: 97.9596% ( 12) 00:08:49.777 13896.790 - 14002.069: 98.0431% ( 11) 00:08:49.777 14002.069 - 14107.348: 98.1569% ( 15) 00:08:49.777 14107.348 - 14212.627: 98.2706% ( 15) 00:08:49.777 14212.627 - 14317.905: 98.3768% ( 14) 00:08:49.777 14317.905 - 14423.184: 98.4678% ( 12) 00:08:49.777 14423.184 - 14528.463: 98.5513% ( 11) 00:08:49.777 14528.463 - 14633.741: 98.6271% ( 10) 00:08:49.777 14633.741 - 14739.020: 98.7106% ( 11) 00:08:49.777 14739.020 - 14844.299: 98.7637% ( 7) 00:08:49.777 14844.299 - 14949.578: 98.8167% ( 7) 00:08:49.777 14949.578 - 15054.856: 98.8547% ( 5) 00:08:49.777 15054.856 - 15160.135: 98.9002% ( 6) 00:08:49.777 15160.135 - 15265.414: 98.9305% ( 4) 00:08:49.777 15265.414 - 15370.692: 98.9609% ( 4) 00:08:49.777 15370.692 - 15475.971: 98.9912% ( 4) 00:08:49.777 15475.971 - 15581.250: 99.0291% ( 5) 00:08:49.777 43164.273 - 43374.831: 99.0443% ( 2) 00:08:49.777 43374.831 - 43585.388: 99.0974% ( 7) 00:08:49.777 43585.388 - 43795.945: 99.1505% ( 7) 00:08:49.777 43795.945 - 44006.503: 99.2036% ( 7) 00:08:49.777 44006.503 - 44217.060: 99.2491% ( 6) 00:08:49.777 44217.060 - 44427.618: 99.2946% ( 6) 00:08:49.777 44427.618 - 44638.175: 99.3401% ( 6) 00:08:49.777 44638.175 - 44848.733: 99.3856% ( 6) 00:08:49.777 44848.733 - 45059.290: 99.4311% ( 6) 00:08:49.777 45059.290 - 45269.847: 99.4691% ( 5) 00:08:49.777 45269.847 - 45480.405: 99.5146% ( 6) 00:08:49.777 52007.685 - 52218.243: 99.5373% ( 3) 00:08:49.777 52218.243 - 52428.800: 99.5828% ( 6) 00:08:49.777 52428.800 - 52639.357: 99.6283% ( 6) 00:08:49.777 52639.357 - 52849.915: 99.6663% ( 5) 00:08:49.777 52849.915 - 53060.472: 99.7118% ( 6) 00:08:49.777 53060.472 - 53271.030: 99.7573% ( 6) 00:08:49.777 53271.030 - 53481.587: 99.7952% ( 5) 00:08:49.777 53481.587 - 53692.145: 99.8483% ( 7) 00:08:49.777 53692.145 - 53902.702: 99.8938% ( 6) 00:08:49.777 53902.702 - 54323.817: 99.9848% ( 12) 00:08:49.777 54323.817 - 54744.932: 100.0000% ( 2) 00:08:49.777 00:08:49.777 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:49.777 ============================================================================== 00:08:49.777 Range in us Cumulative IO count 00:08:49.777 8053.822 - 8106.461: 0.0455% ( 6) 00:08:49.777 8106.461 - 8159.100: 0.2124% ( 22) 00:08:49.777 8159.100 - 8211.740: 0.5765% ( 48) 00:08:49.777 8211.740 - 8264.379: 1.1302% ( 73) 00:08:49.777 8264.379 - 8317.018: 2.0176% ( 117) 00:08:49.777 8317.018 - 8369.658: 3.3146% ( 171) 00:08:49.777 8369.658 - 8422.297: 5.4308% ( 279) 00:08:49.777 8422.297 - 8474.937: 8.2448% ( 371) 00:08:49.777 8474.937 - 8527.576: 11.4078% ( 417) 00:08:49.777 8527.576 - 8580.215: 15.2458% ( 506) 00:08:49.777 8580.215 - 8632.855: 19.4402% ( 553) 00:08:49.777 8632.855 - 8685.494: 23.8319% ( 579) 00:08:49.777 8685.494 - 8738.133: 28.2539% ( 583) 00:08:49.777 8738.133 - 8790.773: 32.9111% ( 614) 00:08:49.777 8790.773 - 8843.412: 37.5986% ( 618) 00:08:49.777 8843.412 - 8896.051: 42.2178% ( 609) 00:08:49.777 8896.051 - 8948.691: 46.9736% ( 627) 00:08:49.777 8948.691 - 9001.330: 51.5701% ( 606) 00:08:49.777 9001.330 - 
9053.969: 56.0604% ( 592) 00:08:49.777 9053.969 - 9106.609: 60.1790% ( 543) 00:08:49.777 9106.609 - 9159.248: 64.0473% ( 510) 00:08:49.777 9159.248 - 9211.888: 67.5061% ( 456) 00:08:49.777 9211.888 - 9264.527: 70.3883% ( 380) 00:08:49.777 9264.527 - 9317.166: 72.4970% ( 278) 00:08:49.777 9317.166 - 9369.806: 74.2946% ( 237) 00:08:49.777 9369.806 - 9422.445: 75.6599% ( 180) 00:08:49.777 9422.445 - 9475.084: 76.7521% ( 144) 00:08:49.777 9475.084 - 9527.724: 77.4954% ( 98) 00:08:49.777 9527.724 - 9580.363: 78.0416% ( 72) 00:08:49.777 9580.363 - 9633.002: 78.4663% ( 56) 00:08:49.777 9633.002 - 9685.642: 78.9138% ( 59) 00:08:49.777 9685.642 - 9738.281: 79.3234% ( 54) 00:08:49.777 9738.281 - 9790.920: 79.7482% ( 56) 00:08:49.777 9790.920 - 9843.560: 80.1729% ( 56) 00:08:49.777 9843.560 - 9896.199: 80.5977% ( 56) 00:08:49.777 9896.199 - 9948.839: 81.0149% ( 55) 00:08:49.777 9948.839 - 10001.478: 81.5231% ( 67) 00:08:49.777 10001.478 - 10054.117: 81.9933% ( 62) 00:08:49.777 10054.117 - 10106.757: 82.4788% ( 64) 00:08:49.777 10106.757 - 10159.396: 83.0021% ( 69) 00:08:49.777 10159.396 - 10212.035: 83.5634% ( 74) 00:08:49.777 10212.035 - 10264.675: 84.0868% ( 69) 00:08:49.777 10264.675 - 10317.314: 84.6253% ( 71) 00:08:49.777 10317.314 - 10369.953: 85.1790% ( 73) 00:08:49.777 10369.953 - 10422.593: 85.7403% ( 74) 00:08:49.777 10422.593 - 10475.232: 86.2864% ( 72) 00:08:49.777 10475.232 - 10527.871: 86.7643% ( 63) 00:08:49.777 10527.871 - 10580.511: 87.2042% ( 58) 00:08:49.777 10580.511 - 10633.150: 87.6214% ( 55) 00:08:49.777 10633.150 - 10685.790: 87.9930% ( 49) 00:08:49.777 10685.790 - 10738.429: 88.3040% ( 41) 00:08:49.777 10738.429 - 10791.068: 88.5771% ( 36) 00:08:49.777 10791.068 - 10843.708: 88.8805% ( 40) 00:08:49.777 10843.708 - 10896.347: 89.1535% ( 36) 00:08:49.777 10896.347 - 10948.986: 89.4417% ( 38) 00:08:49.777 10948.986 - 11001.626: 89.7679% ( 43) 00:08:49.777 11001.626 - 11054.265: 90.0182% ( 33) 00:08:49.777 11054.265 - 11106.904: 90.2761% ( 34) 00:08:49.777 11106.904 - 11159.544: 90.5340% ( 34) 00:08:49.777 11159.544 - 11212.183: 90.7919% ( 34) 00:08:49.777 11212.183 - 11264.822: 91.0422% ( 33) 00:08:49.777 11264.822 - 11317.462: 91.2773% ( 31) 00:08:49.777 11317.462 - 11370.101: 91.5352% ( 34) 00:08:49.777 11370.101 - 11422.741: 91.7931% ( 34) 00:08:49.777 11422.741 - 11475.380: 92.0586% ( 35) 00:08:49.777 11475.380 - 11528.019: 92.3316% ( 36) 00:08:49.777 11528.019 - 11580.659: 92.5819% ( 33) 00:08:49.777 11580.659 - 11633.298: 92.8322% ( 33) 00:08:49.777 11633.298 - 11685.937: 93.0598% ( 30) 00:08:49.777 11685.937 - 11738.577: 93.3101% ( 33) 00:08:49.777 11738.577 - 11791.216: 93.5225% ( 28) 00:08:49.778 11791.216 - 11843.855: 93.7576% ( 31) 00:08:49.778 11843.855 - 11896.495: 94.0003% ( 32) 00:08:49.778 11896.495 - 11949.134: 94.2582% ( 34) 00:08:49.778 11949.134 - 12001.773: 94.4933% ( 31) 00:08:49.778 12001.773 - 12054.413: 94.7588% ( 35) 00:08:49.778 12054.413 - 12107.052: 94.9939% ( 31) 00:08:49.778 12107.052 - 12159.692: 95.2367% ( 32) 00:08:49.778 12159.692 - 12212.331: 95.4642% ( 30) 00:08:49.778 12212.331 - 12264.970: 95.6538% ( 25) 00:08:49.778 12264.970 - 12317.610: 95.8434% ( 25) 00:08:49.778 12317.610 - 12370.249: 96.0027% ( 21) 00:08:49.778 12370.249 - 12422.888: 96.1468% ( 19) 00:08:49.778 12422.888 - 12475.528: 96.3061% ( 21) 00:08:49.778 12475.528 - 12528.167: 96.4199% ( 15) 00:08:49.778 12528.167 - 12580.806: 96.5413% ( 16) 00:08:49.778 12580.806 - 12633.446: 96.6778% ( 18) 00:08:49.778 12633.446 - 12686.085: 96.7992% ( 16) 00:08:49.778 12686.085 - 12738.724: 
96.9129% ( 15) 00:08:49.778 12738.724 - 12791.364: 97.0343% ( 16) 00:08:49.778 12791.364 - 12844.003: 97.1253% ( 12) 00:08:49.778 12844.003 - 12896.643: 97.2239% ( 13) 00:08:49.778 12896.643 - 12949.282: 97.3073% ( 11) 00:08:49.778 12949.282 - 13001.921: 97.3908% ( 11) 00:08:49.778 13001.921 - 13054.561: 97.4742% ( 11) 00:08:49.778 13054.561 - 13107.200: 97.5349% ( 8) 00:08:49.778 13107.200 - 13159.839: 97.5956% ( 8) 00:08:49.778 13159.839 - 13212.479: 97.6487% ( 7) 00:08:49.778 13212.479 - 13265.118: 97.7093% ( 8) 00:08:49.778 13265.118 - 13317.757: 97.7776% ( 9) 00:08:49.778 13317.757 - 13370.397: 97.8231% ( 6) 00:08:49.778 13370.397 - 13423.036: 97.8838% ( 8) 00:08:49.778 13423.036 - 13475.676: 97.9217% ( 5) 00:08:49.778 13475.676 - 13580.954: 97.9976% ( 10) 00:08:49.778 13580.954 - 13686.233: 98.0507% ( 7) 00:08:49.778 13686.233 - 13791.512: 98.1113% ( 8) 00:08:49.778 13791.512 - 13896.790: 98.1644% ( 7) 00:08:49.778 13896.790 - 14002.069: 98.2175% ( 7) 00:08:49.778 14002.069 - 14107.348: 98.2630% ( 6) 00:08:49.778 14107.348 - 14212.627: 98.3313% ( 9) 00:08:49.778 14212.627 - 14317.905: 98.4072% ( 10) 00:08:49.778 14317.905 - 14423.184: 98.4906% ( 11) 00:08:49.778 14423.184 - 14528.463: 98.5589% ( 9) 00:08:49.778 14528.463 - 14633.741: 98.6195% ( 8) 00:08:49.778 14633.741 - 14739.020: 98.6726% ( 7) 00:08:49.778 14739.020 - 14844.299: 98.7333% ( 8) 00:08:49.778 14844.299 - 14949.578: 98.7788% ( 6) 00:08:49.778 14949.578 - 15054.856: 98.8092% ( 4) 00:08:49.778 15054.856 - 15160.135: 98.8395% ( 4) 00:08:49.778 15160.135 - 15265.414: 98.8774% ( 5) 00:08:49.778 15265.414 - 15370.692: 98.9078% ( 4) 00:08:49.778 15370.692 - 15475.971: 98.9305% ( 3) 00:08:49.778 15475.971 - 15581.250: 98.9609% ( 4) 00:08:49.778 15581.250 - 15686.529: 98.9988% ( 5) 00:08:49.778 15686.529 - 15791.807: 99.0291% ( 4) 00:08:49.778 42532.601 - 42743.158: 99.0443% ( 2) 00:08:49.778 42743.158 - 42953.716: 99.0898% ( 6) 00:08:49.778 42953.716 - 43164.273: 99.1277% ( 5) 00:08:49.778 43164.273 - 43374.831: 99.1732% ( 6) 00:08:49.778 43374.831 - 43585.388: 99.2188% ( 6) 00:08:49.778 43585.388 - 43795.945: 99.2643% ( 6) 00:08:49.778 43795.945 - 44006.503: 99.3098% ( 6) 00:08:49.778 44006.503 - 44217.060: 99.3553% ( 6) 00:08:49.778 44217.060 - 44427.618: 99.3932% ( 5) 00:08:49.778 44427.618 - 44638.175: 99.4387% ( 6) 00:08:49.778 44638.175 - 44848.733: 99.4842% ( 6) 00:08:49.778 44848.733 - 45059.290: 99.5146% ( 4) 00:08:49.778 50533.783 - 50744.341: 99.5221% ( 1) 00:08:49.778 50744.341 - 50954.898: 99.5525% ( 4) 00:08:49.778 50954.898 - 51165.455: 99.5980% ( 6) 00:08:49.778 51165.455 - 51376.013: 99.6359% ( 5) 00:08:49.778 51376.013 - 51586.570: 99.6814% ( 6) 00:08:49.778 51586.570 - 51797.128: 99.7194% ( 5) 00:08:49.778 51797.128 - 52007.685: 99.7573% ( 5) 00:08:49.778 52007.685 - 52218.243: 99.8104% ( 7) 00:08:49.778 52218.243 - 52428.800: 99.8559% ( 6) 00:08:49.778 52428.800 - 52639.357: 99.9014% ( 6) 00:08:49.778 52639.357 - 52849.915: 99.9469% ( 6) 00:08:49.778 52849.915 - 53060.472: 99.9924% ( 6) 00:08:49.778 53060.472 - 53271.030: 100.0000% ( 1) 00:08:49.778 00:08:49.778 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:49.778 ============================================================================== 00:08:49.778 Range in us Cumulative IO count 00:08:49.778 8053.822 - 8106.461: 0.0607% ( 8) 00:08:49.778 8106.461 - 8159.100: 0.2958% ( 31) 00:08:49.778 8159.100 - 8211.740: 0.6599% ( 48) 00:08:49.778 8211.740 - 8264.379: 1.2439% ( 77) 00:08:49.778 8264.379 - 8317.018: 2.1465% ( 119) 00:08:49.778 
8317.018 - 8369.658: 3.5422% ( 184) 00:08:49.778 8369.658 - 8422.297: 5.5294% ( 262) 00:08:49.778 8422.297 - 8474.937: 8.2979% ( 365) 00:08:49.778 8474.937 - 8527.576: 11.4684% ( 418) 00:08:49.778 8527.576 - 8580.215: 15.1775% ( 489) 00:08:49.778 8580.215 - 8632.855: 19.1975% ( 530) 00:08:49.778 8632.855 - 8685.494: 23.7561% ( 601) 00:08:49.778 8685.494 - 8738.133: 28.3829% ( 610) 00:08:49.778 8738.133 - 8790.773: 32.9870% ( 607) 00:08:49.778 8790.773 - 8843.412: 37.9096% ( 649) 00:08:49.778 8843.412 - 8896.051: 42.6654% ( 627) 00:08:49.778 8896.051 - 8948.691: 47.3756% ( 621) 00:08:49.778 8948.691 - 9001.330: 51.9569% ( 604) 00:08:49.778 9001.330 - 9053.969: 56.3865% ( 584) 00:08:49.778 9053.969 - 9106.609: 60.5203% ( 545) 00:08:49.778 9106.609 - 9159.248: 64.3356% ( 503) 00:08:49.778 9159.248 - 9211.888: 67.7791% ( 454) 00:08:49.778 9211.888 - 9264.527: 70.4870% ( 357) 00:08:49.778 9264.527 - 9317.166: 72.6866% ( 290) 00:08:49.778 9317.166 - 9369.806: 74.5677% ( 248) 00:08:49.778 9369.806 - 9422.445: 75.9936% ( 188) 00:08:49.778 9422.445 - 9475.084: 76.9266% ( 123) 00:08:49.778 9475.084 - 9527.724: 77.4954% ( 75) 00:08:49.778 9527.724 - 9580.363: 77.9505% ( 60) 00:08:49.778 9580.363 - 9633.002: 78.4056% ( 60) 00:08:49.778 9633.002 - 9685.642: 78.8001% ( 52) 00:08:49.778 9685.642 - 9738.281: 79.1717% ( 49) 00:08:49.778 9738.281 - 9790.920: 79.5206% ( 46) 00:08:49.778 9790.920 - 9843.560: 79.9075% ( 51) 00:08:49.778 9843.560 - 9896.199: 80.3095% ( 53) 00:08:49.778 9896.199 - 9948.839: 80.7646% ( 60) 00:08:49.778 9948.839 - 10001.478: 81.2272% ( 61) 00:08:49.778 10001.478 - 10054.117: 81.7354% ( 67) 00:08:49.778 10054.117 - 10106.757: 82.1905% ( 60) 00:08:49.778 10106.757 - 10159.396: 82.7063% ( 68) 00:08:49.778 10159.396 - 10212.035: 83.2979% ( 78) 00:08:49.778 10212.035 - 10264.675: 83.8971% ( 79) 00:08:49.778 10264.675 - 10317.314: 84.4281% ( 70) 00:08:49.778 10317.314 - 10369.953: 84.9363% ( 67) 00:08:49.778 10369.953 - 10422.593: 85.4445% ( 67) 00:08:49.778 10422.593 - 10475.232: 85.9527% ( 67) 00:08:49.778 10475.232 - 10527.871: 86.4381% ( 64) 00:08:49.778 10527.871 - 10580.511: 86.8780% ( 58) 00:08:49.778 10580.511 - 10633.150: 87.3559% ( 63) 00:08:49.778 10633.150 - 10685.790: 87.7655% ( 54) 00:08:49.778 10685.790 - 10738.429: 88.2054% ( 58) 00:08:49.778 10738.429 - 10791.068: 88.6150% ( 54) 00:08:49.778 10791.068 - 10843.708: 88.9411% ( 43) 00:08:49.778 10843.708 - 10896.347: 89.2445% ( 40) 00:08:49.778 10896.347 - 10948.986: 89.5252% ( 37) 00:08:49.778 10948.986 - 11001.626: 89.8438% ( 42) 00:08:49.778 11001.626 - 11054.265: 90.1927% ( 46) 00:08:49.778 11054.265 - 11106.904: 90.4733% ( 37) 00:08:49.778 11106.904 - 11159.544: 90.7008% ( 30) 00:08:49.778 11159.544 - 11212.183: 90.9284% ( 30) 00:08:49.778 11212.183 - 11264.822: 91.1256% ( 26) 00:08:49.778 11264.822 - 11317.462: 91.3304% ( 27) 00:08:49.778 11317.462 - 11370.101: 91.5352% ( 27) 00:08:49.778 11370.101 - 11422.741: 91.7703% ( 31) 00:08:49.778 11422.741 - 11475.380: 92.0055% ( 31) 00:08:49.778 11475.380 - 11528.019: 92.2482% ( 32) 00:08:49.778 11528.019 - 11580.659: 92.5364% ( 38) 00:08:49.778 11580.659 - 11633.298: 92.8095% ( 36) 00:08:49.778 11633.298 - 11685.937: 93.1129% ( 40) 00:08:49.778 11685.937 - 11738.577: 93.3935% ( 37) 00:08:49.778 11738.577 - 11791.216: 93.6969% ( 40) 00:08:49.778 11791.216 - 11843.855: 93.9700% ( 36) 00:08:49.778 11843.855 - 11896.495: 94.2279% ( 34) 00:08:49.778 11896.495 - 11949.134: 94.4326% ( 27) 00:08:49.778 11949.134 - 12001.773: 94.6374% ( 27) 00:08:49.778 12001.773 - 12054.413: 
94.8119% ( 23) 00:08:49.778 12054.413 - 12107.052: 95.0015% ( 25) 00:08:49.778 12107.052 - 12159.692: 95.2063% ( 27) 00:08:49.778 12159.692 - 12212.331: 95.4035% ( 26) 00:08:49.778 12212.331 - 12264.970: 95.6007% ( 26) 00:08:49.778 12264.970 - 12317.610: 95.7904% ( 25) 00:08:49.778 12317.610 - 12370.249: 95.9648% ( 23) 00:08:49.778 12370.249 - 12422.888: 96.1013% ( 18) 00:08:49.778 12422.888 - 12475.528: 96.2530% ( 20) 00:08:49.778 12475.528 - 12528.167: 96.3820% ( 17) 00:08:49.778 12528.167 - 12580.806: 96.5185% ( 18) 00:08:49.778 12580.806 - 12633.446: 96.6550% ( 18) 00:08:49.778 12633.446 - 12686.085: 96.7992% ( 19) 00:08:49.778 12686.085 - 12738.724: 96.9357% ( 18) 00:08:49.778 12738.724 - 12791.364: 97.0798% ( 19) 00:08:49.778 12791.364 - 12844.003: 97.2239% ( 19) 00:08:49.778 12844.003 - 12896.643: 97.3453% ( 16) 00:08:49.778 12896.643 - 12949.282: 97.4515% ( 14) 00:08:49.778 12949.282 - 13001.921: 97.5425% ( 12) 00:08:49.778 13001.921 - 13054.561: 97.6183% ( 10) 00:08:49.778 13054.561 - 13107.200: 97.6790% ( 8) 00:08:49.778 13107.200 - 13159.839: 97.7473% ( 9) 00:08:49.778 13159.839 - 13212.479: 97.7776% ( 4) 00:08:49.778 13212.479 - 13265.118: 97.8155% ( 5) 00:08:49.778 13265.118 - 13317.757: 97.8610% ( 6) 00:08:49.778 13317.757 - 13370.397: 97.9066% ( 6) 00:08:49.778 13370.397 - 13423.036: 97.9445% ( 5) 00:08:49.778 13423.036 - 13475.676: 97.9976% ( 7) 00:08:49.778 13475.676 - 13580.954: 98.0810% ( 11) 00:08:49.779 13580.954 - 13686.233: 98.1720% ( 12) 00:08:49.779 13686.233 - 13791.512: 98.2555% ( 11) 00:08:49.779 13791.512 - 13896.790: 98.3237% ( 9) 00:08:49.779 13896.790 - 14002.069: 98.3768% ( 7) 00:08:49.779 14002.069 - 14107.348: 98.4223% ( 6) 00:08:49.779 14107.348 - 14212.627: 98.4451% ( 3) 00:08:49.779 14212.627 - 14317.905: 98.4754% ( 4) 00:08:49.779 14317.905 - 14423.184: 98.4982% ( 3) 00:08:49.779 14423.184 - 14528.463: 98.5209% ( 3) 00:08:49.779 14528.463 - 14633.741: 98.5437% ( 3) 00:08:49.779 14739.020 - 14844.299: 98.5740% ( 4) 00:08:49.779 14844.299 - 14949.578: 98.6044% ( 4) 00:08:49.779 14949.578 - 15054.856: 98.6423% ( 5) 00:08:49.779 15054.856 - 15160.135: 98.6802% ( 5) 00:08:49.779 15160.135 - 15265.414: 98.7030% ( 3) 00:08:49.779 15265.414 - 15370.692: 98.7409% ( 5) 00:08:49.779 15370.692 - 15475.971: 98.7712% ( 4) 00:08:49.779 15475.971 - 15581.250: 98.8016% ( 4) 00:08:49.779 15581.250 - 15686.529: 98.8319% ( 4) 00:08:49.779 15686.529 - 15791.807: 98.8623% ( 4) 00:08:49.779 15791.807 - 15897.086: 98.8926% ( 4) 00:08:49.779 15897.086 - 16002.365: 98.9305% ( 5) 00:08:49.779 16002.365 - 16107.643: 98.9760% ( 6) 00:08:49.779 16107.643 - 16212.922: 99.0140% ( 5) 00:08:49.779 16212.922 - 16318.201: 99.0291% ( 2) 00:08:49.779 41058.699 - 41269.256: 99.0519% ( 3) 00:08:49.779 41269.256 - 41479.814: 99.1050% ( 7) 00:08:49.779 41479.814 - 41690.371: 99.1505% ( 6) 00:08:49.779 41690.371 - 41900.929: 99.1960% ( 6) 00:08:49.779 41900.929 - 42111.486: 99.2415% ( 6) 00:08:49.779 42111.486 - 42322.043: 99.2794% ( 5) 00:08:49.779 42322.043 - 42532.601: 99.3174% ( 5) 00:08:49.779 42532.601 - 42743.158: 99.3629% ( 6) 00:08:49.779 42743.158 - 42953.716: 99.4084% ( 6) 00:08:49.779 42953.716 - 43164.273: 99.4539% ( 6) 00:08:49.779 43164.273 - 43374.831: 99.4918% ( 5) 00:08:49.779 43374.831 - 43585.388: 99.5146% ( 3) 00:08:49.779 49059.881 - 49270.439: 99.5449% ( 4) 00:08:49.779 49270.439 - 49480.996: 99.5904% ( 6) 00:08:49.779 49480.996 - 49691.553: 99.6359% ( 6) 00:08:49.779 49691.553 - 49902.111: 99.6814% ( 6) 00:08:49.779 49902.111 - 50112.668: 99.7269% ( 6) 00:08:49.779 
00:08:49.779  50112.668 -  50323.226:   99.7725% ( 6)
00:08:49.779  50323.226 -  50533.783:   99.8255% ( 7)
00:08:49.779  50533.783 -  50744.341:   99.8711% ( 6)
00:08:49.779  50744.341 -  50954.898:   99.9242% ( 7)
00:08:49.779  50954.898 -  51165.455:   99.9697% ( 6)
00:08:49.779  51165.455 -  51376.013:  100.0000% ( 4)
00:08:49.779 
00:08:49.779 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:49.779 ==============================================================================
00:08:49.779        Range in us     Cumulative    IO count
00:08:49.780 [latency buckets: 8001.182 us - 49691.553 us, cumulative 0.0228% - 100.0000%]
00:08:49.780 
00:08:49.780 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:49.780 ==============================================================================
00:08:49.780        Range in us     Cumulative    IO count
00:08:49.781 [latency buckets: 8001.182 us - 47796.537 us, cumulative 0.0152% - 100.0000%]
00:08:49.781 
00:08:49.781 17:43:16 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:51.189 Initializing NVMe Controllers
00:08:51.189 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:51.189 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:51.189 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:51.189 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:51.189 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:51.189 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:51.189 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:51.189 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:51.189 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:51.189 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:51.189 Initialization complete. Launching workers.
00:08:51.189 ========================================================
00:08:51.189                                                                   Latency(us)
00:08:51.189 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:51.189 PCIE (0000:00:10.0) NSID 1 from core 0:   10082.76     118.16   12787.35    8387.63   44654.96
00:08:51.189 PCIE (0000:00:11.0) NSID 1 from core 0:   10082.76     118.16   12782.71    8610.44   43039.97
00:08:51.189 PCIE (0000:00:13.0) NSID 1 from core 0:   10082.76     118.16   12779.96    8660.04   42161.06
00:08:51.189 PCIE (0000:00:12.0) NSID 1 from core 0:   10082.76     118.16   12774.62    8686.43   40371.19
00:08:51.189 PCIE (0000:00:12.0) NSID 2 from core 0:   10146.57     118.91   12683.34    8738.36   32193.22
00:08:51.189 PCIE (0000:00:12.0) NSID 3 from core 0:   10146.57     118.91   12670.68    8692.04   30630.84
00:08:51.189 ========================================================
00:08:51.189 Total                                  :   60624.19     710.44   12746.30    8387.63   44654.96
00:08:51.189 
00:08:51.189 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:51.189 =================================================================================
00:08:51.189   1.00000% :  8896.051us
00:08:51.189  10.00000% :  9633.002us
00:08:51.189  25.00000% : 10106.757us
00:08:51.189  50.00000% : 10948.986us
00:08:51.189  75.00000% : 14949.578us
00:08:51.189  90.00000% : 18107.939us
00:08:51.189  95.00000% : 19160.726us
00:08:51.189  98.00000% : 21266.300us
00:08:51.189  99.00000% : 36215.878us
00:08:51.189  99.50000% : 43374.831us
00:08:51.189  99.90000% : 44427.618us
00:08:51.189  99.99000% : 44638.175us
00:08:51.189  99.99900% : 44848.733us
00:08:51.189  99.99990% : 44848.733us
00:08:51.189  99.99999% : 44848.733us
00:08:51.189 
00:08:51.189 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:51.190 =================================================================================
00:08:51.190   1.00000% :  8948.691us
00:08:51.190  10.00000% :  9633.002us
00:08:51.190  25.00000% : 10106.757us
00:08:51.190  50.00000% : 10896.347us
00:08:51.190  75.00000% : 15160.135us
00:08:51.190  90.00000% : 18107.939us
00:08:51.190  95.00000% : 19160.726us
00:08:51.190  98.00000% : 21161.022us
00:08:51.190  99.00000% : 34320.861us
00:08:51.190  99.50000% : 41690.371us
00:08:51.190  99.90000% : 42953.716us
00:08:51.190  99.99000% : 43164.273us
00:08:51.190  99.99900% : 43164.273us
00:08:51.190  99.99990% : 43164.273us
00:08:51.190  99.99999% : 43164.273us
00:08:51.190 
00:08:51.190 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:51.190 =================================================================================
00:08:51.190   1.00000% :  9001.330us
00:08:51.190  10.00000% :  9685.642us
00:08:51.190  25.00000% : 10106.757us
00:08:51.190  50.00000% : 10948.986us
00:08:51.190  75.00000% : 15160.135us
00:08:51.190  90.00000% : 18107.939us
00:08:51.190  95.00000% : 18739.611us
00:08:51.190  98.00000% : 21161.022us
00:08:51.190  99.00000% : 33689.189us
00:08:51.190  99.50000% : 40848.141us
00:08:51.190  99.90000% : 41900.929us
00:08:51.190  99.99000% : 42322.043us
00:08:51.190  99.99900% : 42322.043us
00:08:51.190  99.99990% : 42322.043us
00:08:51.190  99.99999% : 42322.043us
00:08:51.190 
00:08:51.190 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:51.190 =================================================================================
00:08:51.190   1.00000% :  9001.330us
00:08:51.190  10.00000% :  9633.002us
00:08:51.190  25.00000% : 10106.757us
00:08:51.190  50.00000% : 11001.626us
00:08:51.190  75.00000% : 15160.135us
00:08:51.190  90.00000% : 18002.660us
00:08:51.190  95.00000% : 18844.890us
00:08:51.190  98.00000% : 20739.907us
00:08:51.190  99.00000% : 31794.172us
00:08:51.190  99.50000% : 38953.124us
00:08:51.190  99.90000% : 40216.469us
00:08:51.190  99.99000% : 40427.027us
00:08:51.190  99.99900% : 40427.027us
00:08:51.190  99.99990% : 40427.027us
00:08:51.190  99.99999% : 40427.027us
00:08:51.190 
00:08:51.190 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:51.190 =================================================================================
00:08:51.190   1.00000% :  9001.330us
00:08:51.190  10.00000% :  9633.002us
00:08:51.190  25.00000% : 10106.757us
00:08:51.190  50.00000% : 11054.265us
00:08:51.190  75.00000% : 15160.135us
00:08:51.190  90.00000% : 17897.382us
00:08:51.190  95.00000% : 19266.005us
00:08:51.190  98.00000% : 21476.858us
00:08:51.190  99.00000% : 24529.941us
00:08:51.190  99.50000% : 30951.942us
00:08:51.190  99.90000% : 32004.729us
00:08:51.190  99.99000% : 32215.287us
00:08:51.190  99.99900% : 32215.287us
00:08:51.190  99.99990% : 32215.287us
00:08:51.190  99.99999% : 32215.287us
00:08:51.190 
00:08:51.190 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:51.190 =================================================================================
00:08:51.190   1.00000% :  9001.330us
00:08:51.190  10.00000% :  9633.002us
00:08:51.190  25.00000% : 10106.757us
00:08:51.190  50.00000% : 11001.626us
00:08:51.190  75.00000% : 15054.856us
00:08:51.190  90.00000% : 17897.382us
00:08:51.190  95.00000% : 19266.005us
00:08:51.190  98.00000% : 21792.694us
00:08:51.190  99.00000% : 24740.498us
00:08:51.190  99.50000% : 29478.040us
00:08:51.190  99.90000% : 30530.827us
00:08:51.190  99.99000% : 30741.385us
00:08:51.190  99.99900% : 30741.385us
00:08:51.190  99.99990% : 30741.385us
00:08:51.190  99.99999% : 30741.385us
00:08:51.190 
00:08:51.190 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:51.190 ==============================================================================
00:08:51.190        Range in us     Cumulative    IO count
00:08:51.191 [latency buckets: 8369.658 us - 44848.733 us, cumulative 0.0099% - 100.0000%]
00:08:51.191 
00:08:51.191 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:51.191 ==============================================================================
00:08:51.191        Range in us     Cumulative    IO count
00:08:51.192 [latency buckets: 8580.215 us - 43164.273 us, cumulative 0.0297% - 100.0000%]
00:08:51.192 
00:08:51.192 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:51.192 ==============================================================================
00:08:51.192        Range in us     Cumulative    IO count
00:08:51.193 [latency buckets: 8632.855 us - 42322.043 us, cumulative 0.0099% - 100.0000%]
00:08:51.193 
00:08:51.193 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:51.193 ==============================================================================
00:08:51.193        Range in us     Cumulative    IO count
00:08:51.194 [latency buckets: 8685.494 us - 40427.027 us, cumulative 0.0593% - 100.0000%]
00:08:51.194 
00:08:51.194 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:51.194 ==============================================================================
00:08:51.194        Range in us     Cumulative    IO count
00:08:51.196 [latency buckets: 8738.133 us - 25056.334 us, cumulative 0.0295% - 99.3514%]
00:08:51.196 25056.334 - 25161.613: 99.3711% ( 2) 00:08:51.196 30320.270 - 30530.827: 99.4202% ( 5) 00:08:51.196 30530.827 - 30741.385: 99.4890% ( 7) 00:08:51.196 30741.385 - 30951.942: 99.5578% ( 7) 00:08:51.196 30951.942 - 31162.500: 99.6364% ( 8) 00:08:51.196 31162.500 - 31373.057: 99.7052% ( 7) 00:08:51.196 31373.057 - 31583.614: 99.7838% ( 8) 00:08:51.196 31583.614 - 31794.172: 99.8526% ( 7) 00:08:51.196 31794.172 - 32004.729: 99.9312% ( 8) 00:08:51.196 32004.729 - 32215.287: 100.0000% ( 7) 00:08:51.196 00:08:51.196 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:51.196 ============================================================================== 00:08:51.196 Range in us Cumulative IO count 00:08:51.196 8685.494 - 8738.133: 0.0491% ( 5) 00:08:51.196 8738.133 - 8790.773: 0.0983% ( 5) 00:08:51.196 8790.773 - 8843.412: 0.1965% ( 10) 00:08:51.196 8843.412 - 8896.051: 0.3636% ( 17) 00:08:51.196 8896.051 - 8948.691: 0.7174% ( 36) 00:08:51.196 8948.691 - 9001.330: 1.0613% ( 35) 00:08:51.196 9001.330 - 9053.969: 1.5527% ( 50) 00:08:51.196 9053.969 - 9106.609: 2.2897% ( 75) 00:08:51.196 9106.609 - 9159.248: 2.8597% ( 58) 00:08:51.196 9159.248 - 9211.888: 3.4198% ( 57) 00:08:51.196 9211.888 - 9264.527: 4.4418% ( 104) 00:08:51.196 9264.527 - 9317.166: 4.9725% ( 54) 00:08:51.196 9317.166 - 9369.806: 5.5621% ( 60) 00:08:51.196 9369.806 - 9422.445: 6.3778% ( 83) 00:08:51.196 9422.445 - 9475.084: 7.1639% ( 80) 00:08:51.196 9475.084 - 9527.724: 8.0680% ( 92) 00:08:51.196 9527.724 - 9580.363: 9.1588% ( 111) 00:08:51.196 9580.363 - 9633.002: 10.4363% ( 130) 00:08:51.196 9633.002 - 9685.642: 11.9693% ( 156) 00:08:51.196 9685.642 - 9738.281: 13.8758% ( 194) 00:08:51.196 9738.281 - 9790.920: 15.6643% ( 182) 00:08:51.196 9790.920 - 9843.560: 18.0621% ( 244) 00:08:51.196 9843.560 - 9896.199: 19.7032% ( 167) 00:08:51.196 9896.199 - 9948.839: 21.5507% ( 188) 00:08:51.196 9948.839 - 10001.478: 23.1329% ( 161) 00:08:51.196 10001.478 - 10054.117: 24.6757% ( 157) 00:08:51.196 10054.117 - 10106.757: 26.3070% ( 166) 00:08:51.196 10106.757 - 10159.396: 28.2724% ( 200) 00:08:51.196 10159.396 - 10212.035: 30.6997% ( 247) 00:08:51.196 10212.035 - 10264.675: 32.5373% ( 187) 00:08:51.196 10264.675 - 10317.314: 34.2964% ( 179) 00:08:51.196 10317.314 - 10369.953: 36.1930% ( 193) 00:08:51.196 10369.953 - 10422.593: 37.9717% ( 181) 00:08:51.196 10422.593 - 10475.232: 39.6030% ( 166) 00:08:51.196 10475.232 - 10527.871: 41.0377% ( 146) 00:08:51.196 10527.871 - 10580.511: 42.3840% ( 137) 00:08:51.196 10580.511 - 10633.150: 43.4061% ( 104) 00:08:51.196 10633.150 - 10685.790: 44.3494% ( 96) 00:08:51.196 10685.790 - 10738.429: 45.6073% ( 128) 00:08:51.196 10738.429 - 10791.068: 46.6293% ( 104) 00:08:51.196 10791.068 - 10843.708: 47.5825% ( 97) 00:08:51.196 10843.708 - 10896.347: 48.4178% ( 85) 00:08:51.196 10896.347 - 10948.986: 49.3907% ( 99) 00:08:51.196 10948.986 - 11001.626: 50.3046% ( 93) 00:08:51.196 11001.626 - 11054.265: 50.8648% ( 57) 00:08:51.196 11054.265 - 11106.904: 51.4347% ( 58) 00:08:51.196 11106.904 - 11159.544: 52.0244% ( 60) 00:08:51.196 11159.544 - 11212.183: 53.3314% ( 133) 00:08:51.196 11212.183 - 11264.822: 54.5303% ( 122) 00:08:51.196 11264.822 - 11317.462: 55.4638% ( 95) 00:08:51.196 11317.462 - 11370.101: 56.4072% ( 96) 00:08:51.196 11370.101 - 11422.741: 57.2917% ( 90) 00:08:51.196 11422.741 - 11475.380: 57.8911% ( 61) 00:08:51.196 11475.380 - 11528.019: 58.3039% ( 42) 00:08:51.196 11528.019 - 11580.659: 58.7854% ( 49) 00:08:51.196 11580.659 - 11633.298: 59.0802% ( 30) 00:08:51.196 
11633.298 - 11685.937: 59.3062% ( 23) 00:08:51.196 11685.937 - 11738.577: 59.5028% ( 20) 00:08:51.196 11738.577 - 11791.216: 59.7779% ( 28) 00:08:51.196 11791.216 - 11843.855: 60.0531% ( 28) 00:08:51.196 11843.855 - 11896.495: 60.3774% ( 33) 00:08:51.196 11896.495 - 11949.134: 60.6918% ( 32) 00:08:51.196 11949.134 - 12001.773: 60.9572% ( 27) 00:08:51.196 12001.773 - 12054.413: 61.2913% ( 34) 00:08:51.196 12054.413 - 12107.052: 61.4682% ( 18) 00:08:51.196 12107.052 - 12159.692: 61.7138% ( 25) 00:08:51.196 12159.692 - 12212.331: 61.9202% ( 21) 00:08:51.196 12212.331 - 12264.970: 62.1659% ( 25) 00:08:51.196 12264.970 - 12317.610: 62.4410% ( 28) 00:08:51.196 12317.610 - 12370.249: 62.7358% ( 30) 00:08:51.196 12370.249 - 12422.888: 63.0012% ( 27) 00:08:51.196 12422.888 - 12475.528: 63.4336% ( 44) 00:08:51.196 12475.528 - 12528.167: 63.7775% ( 35) 00:08:51.196 12528.167 - 12580.806: 64.1608% ( 39) 00:08:51.196 12580.806 - 12633.446: 64.6619% ( 51) 00:08:51.196 12633.446 - 12686.085: 65.0452% ( 39) 00:08:51.196 12686.085 - 12738.724: 65.4088% ( 37) 00:08:51.196 12738.724 - 12791.364: 65.7233% ( 32) 00:08:51.196 12791.364 - 12844.003: 66.1557% ( 44) 00:08:51.196 12844.003 - 12896.643: 66.4308% ( 28) 00:08:51.196 12896.643 - 12949.282: 66.6568% ( 23) 00:08:51.196 12949.282 - 13001.921: 66.9418% ( 29) 00:08:51.196 13001.921 - 13054.561: 67.4332% ( 50) 00:08:51.196 13054.561 - 13107.200: 67.6395% ( 21) 00:08:51.196 13107.200 - 13159.839: 67.8263% ( 19) 00:08:51.196 13159.839 - 13212.479: 68.0326% ( 21) 00:08:51.196 13212.479 - 13265.118: 68.1899% ( 16) 00:08:51.196 13265.118 - 13317.757: 68.3864% ( 20) 00:08:51.196 13317.757 - 13370.397: 68.5633% ( 18) 00:08:51.196 13370.397 - 13423.036: 68.6714% ( 11) 00:08:51.196 13423.036 - 13475.676: 68.8384% ( 17) 00:08:51.196 13475.676 - 13580.954: 69.0645% ( 23) 00:08:51.196 13580.954 - 13686.233: 69.3396% ( 28) 00:08:51.196 13686.233 - 13791.512: 69.9096% ( 58) 00:08:51.196 13791.512 - 13896.790: 70.4796% ( 58) 00:08:51.196 13896.790 - 14002.069: 70.8530% ( 38) 00:08:51.196 14002.069 - 14107.348: 71.4721% ( 63) 00:08:51.196 14107.348 - 14212.627: 72.1305% ( 67) 00:08:51.196 14212.627 - 14317.905: 72.6120% ( 49) 00:08:51.196 14317.905 - 14423.184: 73.0542% ( 45) 00:08:51.196 14423.184 - 14528.463: 73.3097% ( 26) 00:08:51.196 14528.463 - 14633.741: 73.6242% ( 32) 00:08:51.196 14633.741 - 14739.020: 73.9387% ( 32) 00:08:51.196 14739.020 - 14844.299: 74.2433% ( 31) 00:08:51.196 14844.299 - 14949.578: 74.9116% ( 68) 00:08:51.196 14949.578 - 15054.856: 75.4127% ( 51) 00:08:51.196 15054.856 - 15160.135: 75.8156% ( 41) 00:08:51.196 15160.135 - 15265.414: 76.1301% ( 32) 00:08:51.196 15265.414 - 15370.692: 76.3365% ( 21) 00:08:51.196 15370.692 - 15475.971: 76.5527% ( 22) 00:08:51.196 15475.971 - 15581.250: 76.9654% ( 42) 00:08:51.196 15581.250 - 15686.529: 77.2799% ( 32) 00:08:51.196 15686.529 - 15791.807: 77.8892% ( 62) 00:08:51.196 15791.807 - 15897.086: 78.2724% ( 39) 00:08:51.196 15897.086 - 16002.365: 78.7638% ( 50) 00:08:51.196 16002.365 - 16107.643: 79.1372% ( 38) 00:08:51.196 16107.643 - 16212.922: 79.4811% ( 35) 00:08:51.196 16212.922 - 16318.201: 79.8840% ( 41) 00:08:51.196 16318.201 - 16423.480: 80.5425% ( 67) 00:08:51.196 16423.480 - 16528.758: 81.0436% ( 51) 00:08:51.196 16528.758 - 16634.037: 81.9674% ( 94) 00:08:51.196 16634.037 - 16739.316: 82.6160% ( 66) 00:08:51.196 16739.316 - 16844.594: 83.2449% ( 64) 00:08:51.196 16844.594 - 16949.873: 84.0212% ( 79) 00:08:51.196 16949.873 - 17055.152: 84.6698% ( 66) 00:08:51.196 17055.152 - 17160.431: 85.4953% ( 
84) 00:08:51.196 17160.431 - 17265.709: 86.1340% ( 65) 00:08:51.196 17265.709 - 17370.988: 86.8612% ( 74) 00:08:51.196 17370.988 - 17476.267: 87.4410% ( 59) 00:08:51.196 17476.267 - 17581.545: 88.1093% ( 68) 00:08:51.196 17581.545 - 17686.824: 89.0920% ( 100) 00:08:51.196 17686.824 - 17792.103: 89.6521% ( 57) 00:08:51.196 17792.103 - 17897.382: 90.1631% ( 52) 00:08:51.196 17897.382 - 18002.660: 90.4383% ( 28) 00:08:51.196 18002.660 - 18107.939: 90.7921% ( 36) 00:08:51.196 18107.939 - 18213.218: 91.0770% ( 29) 00:08:51.196 18213.218 - 18318.496: 91.3031% ( 23) 00:08:51.196 18318.496 - 18423.775: 91.5782% ( 28) 00:08:51.196 18423.775 - 18529.054: 91.9713% ( 40) 00:08:51.196 18529.054 - 18634.333: 92.4921% ( 53) 00:08:51.196 18634.333 - 18739.611: 93.2685% ( 79) 00:08:51.196 18739.611 - 18844.890: 93.7795% ( 52) 00:08:51.196 18844.890 - 18950.169: 94.0939% ( 32) 00:08:51.196 18950.169 - 19055.447: 94.4969% ( 41) 00:08:51.196 19055.447 - 19160.726: 94.8998% ( 41) 00:08:51.196 19160.726 - 19266.005: 95.2241% ( 33) 00:08:51.196 19266.005 - 19371.284: 95.4796% ( 26) 00:08:51.196 19371.284 - 19476.562: 95.9316% ( 46) 00:08:51.196 19476.562 - 19581.841: 96.1478% ( 22) 00:08:51.196 19581.841 - 19687.120: 96.3345% ( 19) 00:08:51.196 19687.120 - 19792.398: 96.4917% ( 16) 00:08:51.196 19792.398 - 19897.677: 96.7178% ( 23) 00:08:51.196 19897.677 - 20002.956: 96.8947% ( 18) 00:08:51.196 20002.956 - 20108.235: 97.0126% ( 12) 00:08:51.196 20108.235 - 20213.513: 97.0912% ( 8) 00:08:51.196 20213.513 - 20318.792: 97.2976% ( 21) 00:08:51.196 20318.792 - 20424.071: 97.3369% ( 4) 00:08:51.196 20424.071 - 20529.349: 97.3762% ( 4) 00:08:51.196 20529.349 - 20634.628: 97.4057% ( 3) 00:08:51.196 20634.628 - 20739.907: 97.4351% ( 3) 00:08:51.196 20739.907 - 20845.186: 97.4843% ( 5) 00:08:51.196 20845.186 - 20950.464: 97.5432% ( 6) 00:08:51.196 20950.464 - 21055.743: 97.5825% ( 4) 00:08:51.196 21055.743 - 21161.022: 97.6219% ( 4) 00:08:51.196 21161.022 - 21266.300: 97.7005% ( 8) 00:08:51.196 21266.300 - 21371.579: 97.7496% ( 5) 00:08:51.196 21371.579 - 21476.858: 97.8282% ( 8) 00:08:51.196 21476.858 - 21582.137: 97.8872% ( 6) 00:08:51.196 21582.137 - 21687.415: 97.9560% ( 7) 00:08:51.196 21687.415 - 21792.694: 98.0149% ( 6) 00:08:51.196 21792.694 - 21897.973: 98.0837% ( 7) 00:08:51.196 21897.973 - 22003.251: 98.1623% ( 8) 00:08:51.197 22003.251 - 22108.530: 98.2508% ( 9) 00:08:51.197 22108.530 - 22213.809: 98.3589% ( 11) 00:08:51.197 22213.809 - 22319.088: 98.4866% ( 13) 00:08:51.197 22319.088 - 22424.366: 98.5554% ( 7) 00:08:51.197 22424.366 - 22529.645: 98.6242% ( 7) 00:08:51.197 22529.645 - 22634.924: 98.6832% ( 6) 00:08:51.197 22634.924 - 22740.202: 98.7127% ( 3) 00:08:51.197 22740.202 - 22845.481: 98.7421% ( 3) 00:08:51.197 23792.990 - 23898.268: 98.7520% ( 1) 00:08:51.197 24214.104 - 24319.383: 98.7814% ( 3) 00:08:51.197 24319.383 - 24424.662: 98.8109% ( 3) 00:08:51.197 24424.662 - 24529.941: 98.8502% ( 4) 00:08:51.197 24529.941 - 24635.219: 98.9092% ( 6) 00:08:51.197 24635.219 - 24740.498: 99.2531% ( 35) 00:08:51.197 24740.498 - 24845.777: 99.3318% ( 8) 00:08:51.197 24845.777 - 24951.055: 99.3514% ( 2) 00:08:51.197 24951.055 - 25056.334: 99.3711% ( 2) 00:08:51.197 28846.368 - 29056.925: 99.4202% ( 5) 00:08:51.197 29056.925 - 29267.483: 99.4988% ( 8) 00:08:51.197 29267.483 - 29478.040: 99.5774% ( 8) 00:08:51.197 29478.040 - 29688.598: 99.6561% ( 8) 00:08:51.197 29688.598 - 29899.155: 99.7248% ( 7) 00:08:51.197 29899.155 - 30109.712: 99.8035% ( 8) 00:08:51.197 30109.712 - 30320.270: 99.8821% ( 8) 00:08:51.197 
30320.270 - 30530.827: 99.9607% ( 8) 00:08:51.197 30530.827 - 30741.385: 100.0000% ( 4) 00:08:51.197 00:08:51.197 17:43:18 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:51.197 00:08:51.197 real 0m2.748s 00:08:51.197 user 0m2.291s 00:08:51.197 sys 0m0.349s 00:08:51.197 17:43:18 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.197 17:43:18 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:51.197 ************************************ 00:08:51.197 END TEST nvme_perf 00:08:51.197 ************************************ 00:08:51.197 17:43:18 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:51.197 17:43:18 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:51.197 17:43:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.197 17:43:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.197 ************************************ 00:08:51.197 START TEST nvme_hello_world 00:08:51.197 ************************************ 00:08:51.197 17:43:18 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:51.456 Initializing NVMe Controllers 00:08:51.456 Attached to 0000:00:10.0 00:08:51.456 Namespace ID: 1 size: 6GB 00:08:51.456 Attached to 0000:00:11.0 00:08:51.456 Namespace ID: 1 size: 5GB 00:08:51.456 Attached to 0000:00:13.0 00:08:51.456 Namespace ID: 1 size: 1GB 00:08:51.456 Attached to 0000:00:12.0 00:08:51.456 Namespace ID: 1 size: 4GB 00:08:51.456 Namespace ID: 2 size: 4GB 00:08:51.456 Namespace ID: 3 size: 4GB 00:08:51.456 Initialization complete. 00:08:51.456 INFO: using host memory buffer for IO 00:08:51.456 Hello world! 00:08:51.456 INFO: using host memory buffer for IO 00:08:51.456 Hello world! 00:08:51.456 INFO: using host memory buffer for IO 00:08:51.456 Hello world! 00:08:51.456 INFO: using host memory buffer for IO 00:08:51.456 Hello world! 00:08:51.456 INFO: using host memory buffer for IO 00:08:51.456 Hello world! 00:08:51.456 INFO: using host memory buffer for IO 00:08:51.456 Hello world! 
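The hello_world pass above attached to all four emulated controllers and performed one write/read round trip per namespace, which is why six "Hello world!" lines appear for the six namespaces listed (one each on 0000:00:10.0, 0000:00:11.0 and 0000:00:13.0, three on 0000:00:12.0), each falling back to a host memory buffer per the INFO lines. A re-run sketch using the paths from this job; the reading of -i as the SPDK shared-memory instance ID is an assumption, not taken from the tool's usage text:

SPDK=/home/vagrant/spdk_repo/spdk
# Walks every PCIe NVMe controller the environment exposes and prints one
# "Hello world!" per namespace on success
sudo "$SPDK/build/examples/hello_world" -i 0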
00:08:51.456 00:08:51.456 real 0m0.317s 00:08:51.456 user 0m0.106s 00:08:51.456 sys 0m0.170s 00:08:51.456 17:43:18 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.456 ************************************ 00:08:51.456 17:43:18 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:51.456 END TEST nvme_hello_world 00:08:51.456 ************************************ 00:08:51.715 17:43:18 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:51.715 17:43:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.715 17:43:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.715 17:43:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.715 ************************************ 00:08:51.715 START TEST nvme_sgl 00:08:51.715 ************************************ 00:08:51.715 17:43:18 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:51.974 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:51.974 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:51.974 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:51.974 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:51.974 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:51.974 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:51.974 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:51.974 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:51.974 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:51.974 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:51.974 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:51.974 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:51.974 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:51.974 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:51.974 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:51.974 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:51.974 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:51.974 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:51.974 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:51.974 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:51.974 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:51.974 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:51.974 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:51.974 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:51.974 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:51.974 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:51.974 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:51.974 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:51.974 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:51.974 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:51.974 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:51.974 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:51.974 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:51.974 0000:00:12.0: build_io_request_9 Invalid IO length parameter 
00:08:51.974 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:51.974 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:51.974 NVMe Readv/Writev Request test 00:08:51.974 Attached to 0000:00:10.0 00:08:51.974 Attached to 0000:00:11.0 00:08:51.974 Attached to 0000:00:13.0 00:08:51.974 Attached to 0000:00:12.0 00:08:51.974 0000:00:10.0: build_io_request_2 test passed 00:08:51.974 0000:00:10.0: build_io_request_4 test passed 00:08:51.974 0000:00:10.0: build_io_request_5 test passed 00:08:51.974 0000:00:10.0: build_io_request_6 test passed 00:08:51.974 0000:00:10.0: build_io_request_7 test passed 00:08:51.974 0000:00:10.0: build_io_request_10 test passed 00:08:51.974 0000:00:11.0: build_io_request_2 test passed 00:08:51.974 0000:00:11.0: build_io_request_4 test passed 00:08:51.974 0000:00:11.0: build_io_request_5 test passed 00:08:51.974 0000:00:11.0: build_io_request_6 test passed 00:08:51.974 0000:00:11.0: build_io_request_7 test passed 00:08:51.974 0000:00:11.0: build_io_request_10 test passed 00:08:51.974 Cleaning up... 00:08:51.974 00:08:51.974 real 0m0.400s 00:08:51.974 user 0m0.202s 00:08:51.974 sys 0m0.154s 00:08:51.974 17:43:19 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.974 ************************************ 00:08:51.974 END TEST nvme_sgl 00:08:51.974 17:43:19 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:51.974 ************************************ 00:08:51.974 17:43:19 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:51.974 17:43:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.974 17:43:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.974 17:43:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:51.974 ************************************ 00:08:51.974 START TEST nvme_e2edp 00:08:51.974 ************************************ 00:08:51.974 17:43:19 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:52.242 NVMe Write/Read with End-to-End data protection test 00:08:52.242 Attached to 0000:00:10.0 00:08:52.242 Attached to 0000:00:11.0 00:08:52.242 Attached to 0000:00:13.0 00:08:52.242 Attached to 0000:00:12.0 00:08:52.242 Cleaning up... 
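Every pass in this log is driven through the harness's run_test helper, which emits the START TEST/END TEST banners and the real/user/sys timings seen throughout. A hypothetical reconstruction for readability (the real helper lives in SPDK's common/autotest_common.sh, per the xtrace tags here, and also manages xtrace state; the body below is an assumption, not the actual source):

run_test() {
    local name=$1; shift
    echo "START TEST $name"
    time "$@"              # run the test binary with its remaining arguments
    local rc=$?
    echo "END TEST $name"
    return $rc
}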
00:08:52.509 00:08:52.509 real 0m0.294s 00:08:52.509 user 0m0.109s 00:08:52.509 sys 0m0.145s 00:08:52.509 17:43:19 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.509 ************************************ 00:08:52.509 END TEST nvme_e2edp 00:08:52.509 17:43:19 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:52.509 ************************************ 00:08:52.509 17:43:19 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:52.509 17:43:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.509 17:43:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.509 17:43:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.509 ************************************ 00:08:52.509 START TEST nvme_reserve 00:08:52.509 ************************************ 00:08:52.509 17:43:19 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:52.767 ===================================================== 00:08:52.767 NVMe Controller at PCI bus 0, device 16, function 0 00:08:52.767 ===================================================== 00:08:52.767 Reservations: Not Supported 00:08:52.767 ===================================================== 00:08:52.767 NVMe Controller at PCI bus 0, device 17, function 0 00:08:52.767 ===================================================== 00:08:52.767 Reservations: Not Supported 00:08:52.767 ===================================================== 00:08:52.767 NVMe Controller at PCI bus 0, device 19, function 0 00:08:52.767 ===================================================== 00:08:52.767 Reservations: Not Supported 00:08:52.767 ===================================================== 00:08:52.767 NVMe Controller at PCI bus 0, device 18, function 0 00:08:52.767 ===================================================== 00:08:52.767 Reservations: Not Supported 00:08:52.767 Reservation test passed 00:08:52.767 00:08:52.767 real 0m0.323s 00:08:52.767 user 0m0.108s 00:08:52.767 sys 0m0.167s 00:08:52.767 17:43:19 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.767 17:43:19 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:52.767 ************************************ 00:08:52.767 END TEST nvme_reserve 00:08:52.767 ************************************ 00:08:52.767 17:43:19 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:52.767 17:43:19 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.767 17:43:19 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.767 17:43:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:52.767 ************************************ 00:08:52.767 START TEST nvme_err_injection 00:08:52.767 ************************************ 00:08:52.767 17:43:19 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:53.026 NVMe Error Injection test 00:08:53.026 Attached to 0000:00:10.0 00:08:53.026 Attached to 0000:00:11.0 00:08:53.026 Attached to 0000:00:13.0 00:08:53.026 Attached to 0000:00:12.0 00:08:53.026 0000:00:11.0: get features failed as expected 00:08:53.026 0000:00:13.0: get features failed as expected 00:08:53.026 0000:00:12.0: get features failed as expected 00:08:53.026 0000:00:10.0: get features failed as expected 00:08:53.026 
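The reservation probe a few entries back shows all four QEMU controllers reporting "Reservations: Not Supported", so nvme_reserve reduces to a capability check and passes without issuing reservation commands. Reservations support is advertised in Identify Controller (ONCS bit 5); a host-side spot check, assuming nvme-cli and a kernel-visible device rather than one bound to SPDK:

nvme id-ctrl /dev/nvme0 | grep -i oncs   # bit 5 set means reservations supported

The err_injection entries surrounding this note pair an armed failure ("failed as expected") with the same "get features" command succeeding once the injection is cleared, resuming just below.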
0000:00:10.0: get features successfully as expected 00:08:53.026 0000:00:11.0: get features successfully as expected 00:08:53.026 0000:00:13.0: get features successfully as expected 00:08:53.026 0000:00:12.0: get features successfully as expected 00:08:53.026 0000:00:13.0: read failed as expected 00:08:53.026 0000:00:10.0: read failed as expected 00:08:53.026 0000:00:11.0: read failed as expected 00:08:53.026 0000:00:12.0: read failed as expected 00:08:53.026 0000:00:10.0: read successfully as expected 00:08:53.026 0000:00:11.0: read successfully as expected 00:08:53.026 0000:00:13.0: read successfully as expected 00:08:53.026 0000:00:12.0: read successfully as expected 00:08:53.026 Cleaning up... 00:08:53.285 00:08:53.285 real 0m0.327s 00:08:53.285 user 0m0.127s 00:08:53.285 sys 0m0.155s 00:08:53.285 17:43:20 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.285 ************************************ 00:08:53.285 17:43:20 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:53.285 END TEST nvme_err_injection 00:08:53.285 ************************************ 00:08:53.285 17:43:20 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:53.285 17:43:20 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']' 00:08:53.285 17:43:20 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.285 17:43:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:53.285 ************************************ 00:08:53.285 START TEST nvme_overhead 00:08:53.285 ************************************ 00:08:53.285 17:43:20 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:54.665 Initializing NVMe Controllers 00:08:54.665 Attached to 0000:00:10.0 00:08:54.665 Attached to 0000:00:11.0 00:08:54.665 Attached to 0000:00:13.0 00:08:54.665 Attached to 0000:00:12.0 00:08:54.665 Initialization complete. Launching workers. 
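The overhead run now launching workers was started as overhead -o 4096 -t 1 -H -i 0, per the run_test line above. Reading the flags off its output, -o 4096 is the 4 KiB IO size, -t 1 the one-second duration, and -H apparently enables the submit/complete histograms printed below; these flag readings are inferred from the output, not from the tool's usage text. The headline result lands just below: roughly 14.1 us average submit and 9.6 us average complete latency per IO. Re-runnable as:

SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/test/nvme/overhead/overhead" -o 4096 -t 1 -H -i 0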
00:08:54.665 submit (in ns) avg, min, max = 14119.0, 12134.1, 106712.4 00:08:54.665 complete (in ns) avg, min, max = 9558.9, 7923.7, 203966.3 00:08:54.665 00:08:54.665 Submit histogram 00:08:54.665 ================ 00:08:54.665 Range in us Cumulative Count 00:08:54.665 12.132 - 12.183: 0.0477% ( 4) 00:08:54.665 12.183 - 12.235: 0.1073% ( 5) 00:08:54.665 12.235 - 12.286: 0.2384% ( 11) 00:08:54.665 12.286 - 12.337: 0.5363% ( 25) 00:08:54.665 12.337 - 12.389: 0.7985% ( 22) 00:08:54.665 12.389 - 12.440: 1.4659% ( 56) 00:08:54.665 12.440 - 12.492: 2.5265% ( 89) 00:08:54.665 12.492 - 12.543: 3.9209% ( 117) 00:08:54.665 12.543 - 12.594: 5.9111% ( 167) 00:08:54.665 12.594 - 12.646: 8.4138% ( 210) 00:08:54.665 12.646 - 12.697: 11.4051% ( 251) 00:08:54.665 12.697 - 12.749: 14.9088% ( 294) 00:08:54.665 12.749 - 12.800: 18.6629% ( 315) 00:08:54.665 12.800 - 12.851: 22.5599% ( 327) 00:08:54.665 12.851 - 12.903: 26.4927% ( 330) 00:08:54.665 12.903 - 12.954: 30.4493% ( 332) 00:08:54.665 12.954 - 13.006: 34.6919% ( 356) 00:08:54.665 13.006 - 13.057: 38.6724% ( 334) 00:08:54.665 13.057 - 13.108: 42.3192% ( 306) 00:08:54.665 13.108 - 13.160: 46.0017% ( 309) 00:08:54.665 13.160 - 13.263: 52.7232% ( 564) 00:08:54.665 13.263 - 13.365: 58.8488% ( 514) 00:08:54.665 13.365 - 13.468: 64.2712% ( 455) 00:08:54.665 13.468 - 13.571: 68.6926% ( 371) 00:08:54.665 13.571 - 13.674: 72.4824% ( 318) 00:08:54.665 13.674 - 13.777: 75.7002% ( 270) 00:08:54.665 13.777 - 13.880: 78.1671% ( 207) 00:08:54.665 13.880 - 13.982: 79.6210% ( 122) 00:08:54.665 13.982 - 14.085: 80.6340% ( 85) 00:08:54.665 14.085 - 14.188: 81.3729% ( 62) 00:08:54.665 14.188 - 14.291: 81.8377% ( 39) 00:08:54.665 14.291 - 14.394: 82.0164% ( 15) 00:08:54.665 14.394 - 14.496: 82.2190% ( 17) 00:08:54.665 14.496 - 14.599: 82.3621% ( 12) 00:08:54.665 14.599 - 14.702: 82.5051% ( 12) 00:08:54.665 14.702 - 14.805: 82.5527% ( 4) 00:08:54.665 14.805 - 14.908: 82.6481% ( 8) 00:08:54.665 14.908 - 15.010: 82.6957% ( 4) 00:08:54.665 15.010 - 15.113: 82.7315% ( 3) 00:08:54.665 15.113 - 15.216: 82.7911% ( 5) 00:08:54.665 15.216 - 15.319: 82.8626% ( 6) 00:08:54.665 15.319 - 15.422: 82.8864% ( 2) 00:08:54.665 15.422 - 15.524: 83.0175% ( 11) 00:08:54.665 15.524 - 15.627: 83.1367% ( 10) 00:08:54.665 15.627 - 15.730: 83.4108% ( 23) 00:08:54.665 15.730 - 15.833: 83.7564% ( 29) 00:08:54.665 15.833 - 15.936: 84.1258% ( 31) 00:08:54.665 15.936 - 16.039: 84.5430% ( 35) 00:08:54.665 16.039 - 16.141: 85.0316% ( 41) 00:08:54.665 16.141 - 16.244: 85.6394% ( 51) 00:08:54.665 16.244 - 16.347: 86.3306% ( 58) 00:08:54.665 16.347 - 16.450: 86.9980% ( 56) 00:08:54.665 16.450 - 16.553: 87.7011% ( 59) 00:08:54.665 16.553 - 16.655: 88.4519% ( 63) 00:08:54.665 16.655 - 16.758: 89.1312% ( 57) 00:08:54.665 16.758 - 16.861: 89.9178% ( 66) 00:08:54.665 16.861 - 16.964: 90.4541% ( 45) 00:08:54.665 16.964 - 17.067: 90.9308% ( 40) 00:08:54.665 17.067 - 17.169: 91.5028% ( 48) 00:08:54.665 17.169 - 17.272: 91.9437% ( 37) 00:08:54.665 17.272 - 17.375: 92.4443% ( 42) 00:08:54.665 17.375 - 17.478: 92.7899% ( 29) 00:08:54.665 17.478 - 17.581: 93.1593% ( 31) 00:08:54.665 17.581 - 17.684: 93.5407% ( 32) 00:08:54.665 17.684 - 17.786: 93.9221% ( 32) 00:08:54.665 17.786 - 17.889: 94.1604% ( 20) 00:08:54.665 17.889 - 17.992: 94.3392% ( 15) 00:08:54.665 17.992 - 18.095: 94.5299% ( 16) 00:08:54.665 18.095 - 18.198: 94.7920% ( 22) 00:08:54.665 18.198 - 18.300: 94.9708% ( 15) 00:08:54.665 18.300 - 18.403: 95.1376% ( 14) 00:08:54.665 18.403 - 18.506: 95.3283% ( 16) 00:08:54.665 18.506 - 18.609: 95.5309% ( 17) 
00:08:54.665 18.609 - 18.712: 95.6620% ( 11) 00:08:54.665 18.712 - 18.814: 95.7693% ( 9) 00:08:54.665 18.814 - 18.917: 95.8408% ( 6) 00:08:54.665 18.917 - 19.020: 95.9957% ( 13) 00:08:54.665 19.020 - 19.123: 96.1149% ( 10) 00:08:54.665 19.123 - 19.226: 96.2817% ( 14) 00:08:54.665 19.226 - 19.329: 96.3771% ( 8) 00:08:54.665 19.329 - 19.431: 96.5201% ( 12) 00:08:54.665 19.431 - 19.534: 96.6154% ( 8) 00:08:54.665 19.534 - 19.637: 96.7584% ( 12) 00:08:54.665 19.637 - 19.740: 96.8657% ( 9) 00:08:54.665 19.740 - 19.843: 96.9729% ( 9) 00:08:54.665 19.843 - 19.945: 97.0564% ( 7) 00:08:54.665 19.945 - 20.048: 97.1279% ( 6) 00:08:54.665 20.048 - 20.151: 97.2351% ( 9) 00:08:54.665 20.151 - 20.254: 97.3066% ( 6) 00:08:54.665 20.254 - 20.357: 97.3424% ( 3) 00:08:54.665 20.357 - 20.459: 97.4616% ( 10) 00:08:54.665 20.459 - 20.562: 97.5450% ( 7) 00:08:54.665 20.562 - 20.665: 97.6046% ( 5) 00:08:54.665 20.665 - 20.768: 97.6522% ( 4) 00:08:54.665 20.768 - 20.871: 97.6880% ( 3) 00:08:54.665 20.871 - 20.973: 97.7595% ( 6) 00:08:54.665 20.973 - 21.076: 97.8191% ( 5) 00:08:54.665 21.076 - 21.179: 97.8548% ( 3) 00:08:54.665 21.179 - 21.282: 97.9144% ( 5) 00:08:54.665 21.385 - 21.488: 97.9859% ( 6) 00:08:54.665 21.488 - 21.590: 97.9979% ( 1) 00:08:54.665 21.590 - 21.693: 98.0217% ( 2) 00:08:54.665 21.693 - 21.796: 98.0694% ( 4) 00:08:54.665 21.796 - 21.899: 98.0813% ( 1) 00:08:54.665 21.899 - 22.002: 98.0932% ( 1) 00:08:54.665 22.002 - 22.104: 98.1051% ( 1) 00:08:54.665 22.104 - 22.207: 98.1170% ( 1) 00:08:54.665 22.207 - 22.310: 98.1289% ( 1) 00:08:54.665 22.310 - 22.413: 98.1409% ( 1) 00:08:54.665 22.413 - 22.516: 98.1528% ( 1) 00:08:54.665 22.516 - 22.618: 98.1766% ( 2) 00:08:54.665 22.618 - 22.721: 98.1885% ( 1) 00:08:54.665 22.721 - 22.824: 98.2124% ( 2) 00:08:54.665 22.824 - 22.927: 98.2243% ( 1) 00:08:54.665 22.927 - 23.030: 98.2362% ( 1) 00:08:54.665 23.030 - 23.133: 98.2600% ( 2) 00:08:54.665 23.441 - 23.544: 98.2720% ( 1) 00:08:54.665 23.544 - 23.647: 98.2839% ( 1) 00:08:54.665 24.058 - 24.161: 98.3435% ( 5) 00:08:54.665 24.161 - 24.263: 98.3554% ( 1) 00:08:54.665 24.263 - 24.366: 98.3792% ( 2) 00:08:54.665 24.366 - 24.469: 98.3911% ( 1) 00:08:54.665 24.469 - 24.572: 98.4031% ( 1) 00:08:54.665 24.572 - 24.675: 98.4150% ( 1) 00:08:54.665 24.675 - 24.778: 98.4269% ( 1) 00:08:54.665 24.778 - 24.880: 98.4388% ( 1) 00:08:54.665 24.880 - 24.983: 98.4746% ( 3) 00:08:54.665 24.983 - 25.086: 98.4865% ( 1) 00:08:54.665 25.086 - 25.189: 98.4984% ( 1) 00:08:54.665 25.189 - 25.292: 98.5222% ( 2) 00:08:54.665 25.292 - 25.394: 98.5699% ( 4) 00:08:54.665 25.394 - 25.497: 98.6295% ( 5) 00:08:54.665 25.497 - 25.600: 98.6772% ( 4) 00:08:54.665 25.600 - 25.703: 98.7367% ( 5) 00:08:54.665 25.703 - 25.806: 98.7844% ( 4) 00:08:54.665 25.806 - 25.908: 98.8082% ( 2) 00:08:54.665 25.908 - 26.011: 98.8917% ( 7) 00:08:54.665 26.011 - 26.114: 98.9274% ( 3) 00:08:54.665 26.114 - 26.217: 98.9751% ( 4) 00:08:54.665 26.217 - 26.320: 98.9989% ( 2) 00:08:54.665 26.320 - 26.525: 99.0108% ( 1) 00:08:54.665 26.525 - 26.731: 99.0228% ( 1) 00:08:54.665 26.731 - 26.937: 99.0466% ( 2) 00:08:54.665 26.937 - 27.142: 99.0585% ( 1) 00:08:54.665 27.142 - 27.348: 99.0704% ( 1) 00:08:54.665 27.759 - 27.965: 99.1419% ( 6) 00:08:54.665 27.965 - 28.170: 99.1658% ( 2) 00:08:54.665 28.170 - 28.376: 99.1896% ( 2) 00:08:54.665 28.376 - 28.582: 99.2134% ( 2) 00:08:54.665 28.582 - 28.787: 99.2373% ( 2) 00:08:54.665 28.787 - 28.993: 99.2611% ( 2) 00:08:54.665 28.993 - 29.198: 99.2969% ( 3) 00:08:54.665 29.198 - 29.404: 99.3326% ( 3) 00:08:54.665 29.404 - 
29.610: 99.3922% ( 5) 00:08:54.665 29.610 - 29.815: 99.4637% ( 6) 00:08:54.665 29.815 - 30.021: 99.5352% ( 6) 00:08:54.665 30.021 - 30.227: 99.5471% ( 1) 00:08:54.665 30.227 - 30.432: 99.5948% ( 4) 00:08:54.665 30.432 - 30.638: 99.6067% ( 1) 00:08:54.665 30.638 - 30.843: 99.6425% ( 3) 00:08:54.665 30.843 - 31.049: 99.6663% ( 2) 00:08:54.665 31.049 - 31.255: 99.6901% ( 2) 00:08:54.665 31.255 - 31.460: 99.7021% ( 1) 00:08:54.665 31.666 - 31.871: 99.7140% ( 1) 00:08:54.665 32.077 - 32.283: 99.7259% ( 1) 00:08:54.665 32.488 - 32.694: 99.7497% ( 2) 00:08:54.666 32.694 - 32.900: 99.7616% ( 1) 00:08:54.666 33.928 - 34.133: 99.7736% ( 1) 00:08:54.666 34.545 - 34.750: 99.7855% ( 1) 00:08:54.666 35.161 - 35.367: 99.7974% ( 1) 00:08:54.666 35.367 - 35.573: 99.8093% ( 1) 00:08:54.666 35.573 - 35.778: 99.8212% ( 1) 00:08:54.666 36.190 - 36.395: 99.8332% ( 1) 00:08:54.666 36.806 - 37.012: 99.8570% ( 2) 00:08:54.666 39.480 - 39.685: 99.8689% ( 1) 00:08:54.666 40.919 - 41.124: 99.8808% ( 1) 00:08:54.666 41.330 - 41.536: 99.8927% ( 1) 00:08:54.666 45.031 - 45.237: 99.9047% ( 1) 00:08:54.666 47.088 - 47.293: 99.9166% ( 1) 00:08:54.666 50.789 - 50.994: 99.9285% ( 1) 00:08:54.666 58.808 - 59.219: 99.9404% ( 1) 00:08:54.666 59.631 - 60.042: 99.9523% ( 1) 00:08:54.666 75.669 - 76.080: 99.9642% ( 1) 00:08:54.666 101.166 - 101.578: 99.9762% ( 1) 00:08:54.666 106.101 - 106.924: 100.0000% ( 2) 00:08:54.666 00:08:54.666 Complete histogram 00:08:54.666 ================== 00:08:54.666 Range in us Cumulative Count 00:08:54.666 7.916 - 7.968: 0.1192% ( 10) 00:08:54.666 7.968 - 8.019: 1.1918% ( 90) 00:08:54.666 8.019 - 8.071: 3.3607% ( 182) 00:08:54.666 8.071 - 8.122: 6.7453% ( 284) 00:08:54.666 8.122 - 8.173: 10.9522% ( 353) 00:08:54.666 8.173 - 8.225: 16.7203% ( 484) 00:08:54.666 8.225 - 8.276: 22.8936% ( 518) 00:08:54.666 8.276 - 8.328: 27.4699% ( 384) 00:08:54.666 8.328 - 8.379: 30.4970% ( 254) 00:08:54.666 8.379 - 8.431: 32.6779% ( 183) 00:08:54.666 8.431 - 8.482: 34.9184% ( 188) 00:08:54.666 8.482 - 8.533: 37.5521% ( 221) 00:08:54.666 8.533 - 8.585: 40.2217% ( 224) 00:08:54.666 8.585 - 8.636: 43.5109% ( 276) 00:08:54.666 8.636 - 8.688: 47.3722% ( 324) 00:08:54.666 8.688 - 8.739: 51.6267% ( 357) 00:08:54.666 8.739 - 8.790: 55.2854% ( 307) 00:08:54.666 8.790 - 8.842: 58.0741% ( 234) 00:08:54.666 8.842 - 8.893: 60.4338% ( 198) 00:08:54.666 8.893 - 8.945: 62.6862% ( 189) 00:08:54.666 8.945 - 8.996: 65.2604% ( 216) 00:08:54.666 8.996 - 9.047: 67.2864% ( 170) 00:08:54.666 9.047 - 9.099: 69.1813% ( 159) 00:08:54.666 9.099 - 9.150: 70.8497% ( 140) 00:08:54.666 9.150 - 9.202: 72.5301% ( 141) 00:08:54.666 9.202 - 9.253: 74.0079% ( 124) 00:08:54.666 9.253 - 9.304: 75.4856% ( 124) 00:08:54.666 9.304 - 9.356: 76.7489% ( 106) 00:08:54.666 9.356 - 9.407: 78.0002% ( 105) 00:08:54.666 9.407 - 9.459: 79.0371% ( 87) 00:08:54.666 9.459 - 9.510: 80.0262% ( 83) 00:08:54.666 9.510 - 9.561: 80.8366% ( 68) 00:08:54.666 9.561 - 9.613: 81.5517% ( 60) 00:08:54.666 9.613 - 9.664: 82.1237% ( 48) 00:08:54.666 9.664 - 9.716: 82.5408% ( 35) 00:08:54.666 9.716 - 9.767: 82.8030% ( 22) 00:08:54.666 9.767 - 9.818: 83.0890% ( 24) 00:08:54.666 9.818 - 9.870: 83.3870% ( 25) 00:08:54.666 9.870 - 9.921: 83.5061% ( 10) 00:08:54.666 9.921 - 9.973: 83.6730% ( 14) 00:08:54.666 9.973 - 10.024: 83.7922% ( 10) 00:08:54.666 10.024 - 10.076: 83.8398% ( 4) 00:08:54.666 10.076 - 10.127: 83.9471% ( 9) 00:08:54.666 10.127 - 10.178: 84.0067% ( 5) 00:08:54.666 10.178 - 10.230: 84.0424% ( 3) 00:08:54.666 10.230 - 10.281: 84.0901% ( 4) 00:08:54.666 10.281 - 10.333: 
84.1258% ( 3) 00:08:54.666 10.333 - 10.384: 84.2093% ( 7) 00:08:54.666 10.384 - 10.435: 84.2450% ( 3) 00:08:54.666 10.435 - 10.487: 84.3046% ( 5) 00:08:54.666 10.487 - 10.538: 84.3642% ( 5) 00:08:54.666 10.538 - 10.590: 84.4238% ( 5) 00:08:54.666 10.590 - 10.641: 84.5072% ( 7) 00:08:54.666 10.641 - 10.692: 84.5787% ( 6) 00:08:54.666 10.692 - 10.744: 84.5906% ( 1) 00:08:54.666 10.744 - 10.795: 84.6621% ( 6) 00:08:54.666 10.795 - 10.847: 84.6741% ( 1) 00:08:54.666 10.847 - 10.898: 84.6979% ( 2) 00:08:54.666 10.898 - 10.949: 84.7813% ( 7) 00:08:54.666 10.949 - 11.001: 84.8290% ( 4) 00:08:54.666 11.001 - 11.052: 84.9005% ( 6) 00:08:54.666 11.052 - 11.104: 84.9243% ( 2) 00:08:54.666 11.104 - 11.155: 84.9720% ( 4) 00:08:54.666 11.155 - 11.206: 85.0435% ( 6) 00:08:54.666 11.206 - 11.258: 85.0793% ( 3) 00:08:54.666 11.258 - 11.309: 85.1508% ( 6) 00:08:54.666 11.309 - 11.361: 85.1865% ( 3) 00:08:54.666 11.361 - 11.412: 85.2580% ( 6) 00:08:54.666 11.412 - 11.463: 85.3057% ( 4) 00:08:54.666 11.463 - 11.515: 85.3534% ( 4) 00:08:54.666 11.515 - 11.566: 85.4129% ( 5) 00:08:54.666 11.566 - 11.618: 85.4487% ( 3) 00:08:54.666 11.618 - 11.669: 85.5679% ( 10) 00:08:54.666 11.669 - 11.720: 85.6155% ( 4) 00:08:54.666 11.720 - 11.772: 85.6870% ( 6) 00:08:54.666 11.772 - 11.823: 85.7824% ( 8) 00:08:54.666 11.823 - 11.875: 85.8539% ( 6) 00:08:54.666 11.875 - 11.926: 85.9373% ( 7) 00:08:54.666 11.926 - 11.978: 86.1161% ( 15) 00:08:54.666 11.978 - 12.029: 86.2353% ( 10) 00:08:54.666 12.029 - 12.080: 86.3902% ( 13) 00:08:54.666 12.080 - 12.132: 86.5094% ( 10) 00:08:54.666 12.132 - 12.183: 86.6643% ( 13) 00:08:54.666 12.183 - 12.235: 86.9741% ( 26) 00:08:54.666 12.235 - 12.286: 87.2602% ( 24) 00:08:54.666 12.286 - 12.337: 87.5581% ( 25) 00:08:54.666 12.337 - 12.389: 87.9275% ( 31) 00:08:54.666 12.389 - 12.440: 88.3208% ( 33) 00:08:54.666 12.440 - 12.492: 88.6783% ( 30) 00:08:54.666 12.492 - 12.543: 89.0835% ( 34) 00:08:54.666 12.543 - 12.594: 89.4530% ( 31) 00:08:54.666 12.594 - 12.646: 89.9178% ( 39) 00:08:54.666 12.646 - 12.697: 90.3587% ( 37) 00:08:54.666 12.697 - 12.749: 90.6447% ( 24) 00:08:54.666 12.749 - 12.800: 91.0857% ( 37) 00:08:54.666 12.800 - 12.851: 91.3836% ( 25) 00:08:54.666 12.851 - 12.903: 91.7054% ( 27) 00:08:54.666 12.903 - 12.954: 91.9914% ( 24) 00:08:54.666 12.954 - 13.006: 92.2179% ( 19) 00:08:54.666 13.006 - 13.057: 92.5396% ( 27) 00:08:54.666 13.057 - 13.108: 92.7899% ( 21) 00:08:54.666 13.108 - 13.160: 93.1236% ( 28) 00:08:54.666 13.160 - 13.263: 93.6241% ( 42) 00:08:54.666 13.263 - 13.365: 94.0889% ( 39) 00:08:54.666 13.365 - 13.468: 94.4345% ( 29) 00:08:54.666 13.468 - 13.571: 94.6729% ( 20) 00:08:54.666 13.571 - 13.674: 94.9589% ( 24) 00:08:54.666 13.674 - 13.777: 95.2330% ( 23) 00:08:54.666 13.777 - 13.880: 95.5190% ( 24) 00:08:54.666 13.880 - 13.982: 95.8289% ( 26) 00:08:54.666 13.982 - 14.085: 96.0076% ( 15) 00:08:54.666 14.085 - 14.188: 96.1268% ( 10) 00:08:54.666 14.188 - 14.291: 96.2341% ( 9) 00:08:54.666 14.291 - 14.394: 96.3294% ( 8) 00:08:54.666 14.394 - 14.496: 96.4724% ( 12) 00:08:54.666 14.496 - 14.599: 96.6869% ( 18) 00:08:54.666 14.599 - 14.702: 96.8061% ( 10) 00:08:54.666 14.702 - 14.805: 96.9134% ( 9) 00:08:54.666 14.805 - 14.908: 97.0325% ( 10) 00:08:54.666 14.908 - 15.010: 97.1994% ( 14) 00:08:54.666 15.010 - 15.113: 97.4139% ( 18) 00:08:54.666 15.113 - 15.216: 97.4735% ( 5) 00:08:54.666 15.216 - 15.319: 97.5807% ( 9) 00:08:54.666 15.319 - 15.422: 97.6522% ( 6) 00:08:54.666 15.422 - 15.524: 97.7476% ( 8) 00:08:54.666 15.524 - 15.627: 97.8072% ( 5) 00:08:54.666 15.627 - 
15.730: 97.8429% ( 3) 00:08:54.666 15.730 - 15.833: 97.9383% ( 8) 00:08:54.666 15.833 - 15.936: 98.0098% ( 6) 00:08:54.666 15.936 - 16.039: 98.0455% ( 3) 00:08:54.666 16.039 - 16.141: 98.1170% ( 6) 00:08:54.666 16.141 - 16.244: 98.1409% ( 2) 00:08:54.666 16.244 - 16.347: 98.1885% ( 4) 00:08:54.666 16.347 - 16.450: 98.2243% ( 3) 00:08:54.666 16.450 - 16.553: 98.2481% ( 2) 00:08:54.666 16.655 - 16.758: 98.2720% ( 2) 00:08:54.666 16.758 - 16.861: 98.2958% ( 2) 00:08:54.666 16.861 - 16.964: 98.3196% ( 2) 00:08:54.666 16.964 - 17.067: 98.3315% ( 1) 00:08:54.666 17.169 - 17.272: 98.3554% ( 2) 00:08:54.666 17.272 - 17.375: 98.3792% ( 2) 00:08:54.666 17.375 - 17.478: 98.4031% ( 2) 00:08:54.666 17.581 - 17.684: 98.4507% ( 4) 00:08:54.666 17.684 - 17.786: 98.4626% ( 1) 00:08:54.666 17.786 - 17.889: 98.4984% ( 3) 00:08:54.666 17.992 - 18.095: 98.5103% ( 1) 00:08:54.666 18.095 - 18.198: 98.5341% ( 2) 00:08:54.666 18.403 - 18.506: 98.5461% ( 1) 00:08:54.666 18.506 - 18.609: 98.5699% ( 2) 00:08:54.666 18.917 - 19.020: 98.5818% ( 1) 00:08:54.666 19.020 - 19.123: 98.5937% ( 1) 00:08:54.666 19.843 - 19.945: 98.6056% ( 1) 00:08:54.666 19.945 - 20.048: 98.6176% ( 1) 00:08:54.666 20.254 - 20.357: 98.6295% ( 1) 00:08:54.666 20.357 - 20.459: 98.6414% ( 1) 00:08:54.666 20.459 - 20.562: 98.6652% ( 2) 00:08:54.666 20.562 - 20.665: 98.7248% ( 5) 00:08:54.666 20.665 - 20.768: 98.7725% ( 4) 00:08:54.666 20.768 - 20.871: 98.8082% ( 3) 00:08:54.666 20.871 - 20.973: 98.8559% ( 4) 00:08:54.666 20.973 - 21.076: 98.8678% ( 1) 00:08:54.666 21.076 - 21.179: 98.9036% ( 3) 00:08:54.666 21.179 - 21.282: 98.9274% ( 2) 00:08:54.666 21.282 - 21.385: 98.9632% ( 3) 00:08:54.666 21.385 - 21.488: 98.9870% ( 2) 00:08:54.666 21.488 - 21.590: 98.9989% ( 1) 00:08:54.667 21.590 - 21.693: 99.0228% ( 2) 00:08:54.667 21.693 - 21.796: 99.0704% ( 4) 00:08:54.667 21.796 - 21.899: 99.0943% ( 2) 00:08:54.667 21.899 - 22.002: 99.1300% ( 3) 00:08:54.667 22.002 - 22.104: 99.1419% ( 1) 00:08:54.667 22.104 - 22.207: 99.1658% ( 2) 00:08:54.667 22.310 - 22.413: 99.1896% ( 2) 00:08:54.667 22.516 - 22.618: 99.2134% ( 2) 00:08:54.667 22.824 - 22.927: 99.2254% ( 1) 00:08:54.667 22.927 - 23.030: 99.2373% ( 1) 00:08:54.667 23.441 - 23.544: 99.2492% ( 1) 00:08:54.667 23.647 - 23.749: 99.2611% ( 1) 00:08:54.667 23.749 - 23.852: 99.2849% ( 2) 00:08:54.667 23.955 - 24.058: 99.2969% ( 1) 00:08:54.667 24.263 - 24.366: 99.3088% ( 1) 00:08:54.667 24.469 - 24.572: 99.3207% ( 1) 00:08:54.667 24.675 - 24.778: 99.3445% ( 2) 00:08:54.667 24.778 - 24.880: 99.3684% ( 2) 00:08:54.667 24.983 - 25.086: 99.3922% ( 2) 00:08:54.667 25.086 - 25.189: 99.4160% ( 2) 00:08:54.667 25.189 - 25.292: 99.4518% ( 3) 00:08:54.667 25.394 - 25.497: 99.4756% ( 2) 00:08:54.667 25.497 - 25.600: 99.4875% ( 1) 00:08:54.667 25.806 - 25.908: 99.5114% ( 2) 00:08:54.667 25.908 - 26.011: 99.5471% ( 3) 00:08:54.667 26.114 - 26.217: 99.5591% ( 1) 00:08:54.667 26.320 - 26.525: 99.6067% ( 4) 00:08:54.667 26.525 - 26.731: 99.6186% ( 1) 00:08:54.667 26.731 - 26.937: 99.6306% ( 1) 00:08:54.667 27.142 - 27.348: 99.6425% ( 1) 00:08:54.667 27.348 - 27.553: 99.6663% ( 2) 00:08:54.667 27.965 - 28.170: 99.6901% ( 2) 00:08:54.667 28.170 - 28.376: 99.7497% ( 5) 00:08:54.667 28.582 - 28.787: 99.7616% ( 1) 00:08:54.667 28.787 - 28.993: 99.7736% ( 1) 00:08:54.667 28.993 - 29.198: 99.7855% ( 1) 00:08:54.667 30.021 - 30.227: 99.7974% ( 1) 00:08:54.667 30.227 - 30.432: 99.8093% ( 1) 00:08:54.667 30.843 - 31.049: 99.8212% ( 1) 00:08:54.667 31.255 - 31.460: 99.8451% ( 2) 00:08:54.667 31.666 - 31.871: 99.8570% ( 1) 00:08:54.667 
31.871 - 32.077: 99.8689% ( 1) 00:08:54.667 32.900 - 33.105: 99.8808% ( 1) 00:08:54.667 33.311 - 33.516: 99.8927% ( 1) 00:08:54.667 34.545 - 34.750: 99.9047% ( 1) 00:08:54.667 34.750 - 34.956: 99.9166% ( 1) 00:08:54.667 34.956 - 35.161: 99.9285% ( 1) 00:08:54.667 35.161 - 35.367: 99.9404% ( 1) 00:08:54.667 35.778 - 35.984: 99.9523% ( 1) 00:08:54.667 35.984 - 36.190: 99.9642% ( 1) 00:08:54.667 37.423 - 37.629: 99.9762% ( 1) 00:08:54.667 97.465 - 97.876: 99.9881% ( 1) 00:08:54.667 203.155 - 203.978: 100.0000% ( 1) 00:08:54.667 00:08:54.667 00:08:54.667 real 0m1.319s 00:08:54.667 user 0m1.111s 00:08:54.667 sys 0m0.155s 00:08:54.667 17:43:21 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:54.667 17:43:21 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:54.667 ************************************ 00:08:54.667 END TEST nvme_overhead 00:08:54.667 ************************************ 00:08:54.667 17:43:21 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:54.667 17:43:21 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:54.667 17:43:21 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:54.667 17:43:21 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.667 ************************************ 00:08:54.667 START TEST nvme_arbitration 00:08:54.667 ************************************ 00:08:54.667 17:43:21 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:58.856 Initializing NVMe Controllers 00:08:58.856 Attached to 0000:00:10.0 00:08:58.856 Attached to 0000:00:11.0 00:08:58.856 Attached to 0000:00:13.0 00:08:58.856 Attached to 0000:00:12.0 00:08:58.856 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:58.856 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:58.856 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:58.856 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:58.856 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:58.856 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:58.857 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:58.857 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:58.857 Initialization complete. Launching workers. 
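The arbitration run initializing above spreads worker threads across cores 0-3 (core mask 0xf) and the four controllers, and the binary echoes its own full configuration, so the run can be replayed verbatim:

SPDK=/home/vagrant/spdk_repo/spdk
sudo "$SPDK/build/examples/arbitration" \
    -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
# Readings such as -q 64 queue depth, -s 128 KiB IO size, -w randrw with a
# 50/50 mix (-M 50), -t 3 s runtime and -n 100000 IOs follow common SPDK
# example conventions; they are inferences, not taken from this tool's usage text.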
00:08:58.857 Starting thread on core 1 with urgent priority queue 00:08:58.857 Starting thread on core 2 with urgent priority queue 00:08:58.857 Starting thread on core 3 with urgent priority queue 00:08:58.857 Starting thread on core 0 with urgent priority queue 00:08:58.857 QEMU NVMe Ctrl (12340 ) core 0: 597.33 IO/s 167.41 secs/100000 ios 00:08:58.857 QEMU NVMe Ctrl (12342 ) core 0: 597.33 IO/s 167.41 secs/100000 ios 00:08:58.857 QEMU NVMe Ctrl (12341 ) core 1: 618.67 IO/s 161.64 secs/100000 ios 00:08:58.857 QEMU NVMe Ctrl (12342 ) core 1: 618.67 IO/s 161.64 secs/100000 ios 00:08:58.857 QEMU NVMe Ctrl (12343 ) core 2: 533.33 IO/s 187.50 secs/100000 ios 00:08:58.857 QEMU NVMe Ctrl (12342 ) core 3: 533.33 IO/s 187.50 secs/100000 ios 00:08:58.857 ======================================================== 00:08:58.857 00:08:58.857 00:08:58.857 real 0m3.494s 00:08:58.857 user 0m9.530s 00:08:58.857 sys 0m0.175s 00:08:58.857 17:43:25 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.857 17:43:25 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:58.857 ************************************ 00:08:58.857 END TEST nvme_arbitration 00:08:58.857 ************************************ 00:08:58.857 17:43:25 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:58.857 17:43:25 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:58.857 17:43:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.857 17:43:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:58.857 ************************************ 00:08:58.857 START TEST nvme_single_aen 00:08:58.857 ************************************ 00:08:58.857 17:43:25 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:58.857 Asynchronous Event Request test 00:08:58.857 Attached to 0000:00:10.0 00:08:58.857 Attached to 0000:00:11.0 00:08:58.857 Attached to 0000:00:13.0 00:08:58.857 Attached to 0000:00:12.0 00:08:58.857 Reset controller to setup AER completions for this process 00:08:58.857 Registering asynchronous event callbacks... 
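The single_aen pass beginning here (aer -T -i 0, per the run_test line above) exercises the temperature-threshold AER path: the entries that follow show it reading each controller's original threshold (343 Kelvin), lowering the threshold beneath the current 323 Kelvin reading so each controller fires an asynchronous event, and restoring the threshold from the aer_cb handler. The same composite temperature is readable host-side, assuming nvme-cli and a kernel-visible device:

nvme smart-log /dev/nvme0 | grep -i temperature   # SMART/health log page 0x02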
00:08:58.857 Getting orig temperature thresholds of all controllers 00:08:58.857 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:58.857 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:58.857 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:58.857 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:58.857 Setting all controllers temperature threshold low to trigger AER 00:08:58.857 Waiting for all controllers temperature threshold to be set lower 00:08:58.857 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:58.857 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:58.857 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:58.857 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:58.857 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:58.857 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:58.857 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:58.857 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:58.857 Waiting for all controllers to trigger AER and reset threshold 00:08:58.857 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.857 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.857 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.857 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:58.857 Cleaning up... 00:08:58.857 00:08:58.857 real 0m0.293s 00:08:58.857 user 0m0.100s 00:08:58.857 sys 0m0.149s 00:08:58.857 17:43:25 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.857 ************************************ 00:08:58.857 END TEST nvme_single_aen 00:08:58.857 17:43:25 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:58.857 ************************************ 00:08:58.857 17:43:25 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:58.857 17:43:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:58.857 17:43:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.857 17:43:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:58.857 ************************************ 00:08:58.857 START TEST nvme_doorbell_aers 00:08:58.857 ************************************ 00:08:58.857 17:43:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:08:58.857 17:43:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:58.857 17:43:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:58.857 17:43:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:58.857 17:43:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:58.857 17:43:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:58.857 17:43:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:08:58.857 17:43:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:58.857 17:43:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:58.857 17:43:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
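The traced shell just above builds the PCI address list that the doorbell test then walks; reconstructed here for readability, together with the per-device loop that the runs below correspond to (the function body is inferred from the xtrace, and the 10-second timeout matches the timeout invocations that follow):

rootdir=/home/vagrant/spdk_repo/spdk
get_nvme_bdfs() {
    # gen_nvme.sh emits a JSON bdev config; jq pulls each controller's PCI address
    "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
}
bdfs=($(get_nvme_bdfs))   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
for bdf in "${bdfs[@]}"; do
    timeout --preserve-status 10 \
        "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
done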
00:08:58.857 17:43:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:58.857 17:43:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:58.857 17:43:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:58.857 17:43:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:59.116 [2024-11-20 17:43:26.064907] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64514) is not found. Dropping the request. 00:09:09.155 Executing: test_write_invalid_db 00:09:09.155 Waiting for AER completion... 00:09:09.155 Failure: test_write_invalid_db 00:09:09.155 00:09:09.155 Executing: test_invalid_db_write_overflow_sq 00:09:09.155 Waiting for AER completion... 00:09:09.155 Failure: test_invalid_db_write_overflow_sq 00:09:09.155 00:09:09.155 Executing: test_invalid_db_write_overflow_cq 00:09:09.155 Waiting for AER completion... 00:09:09.155 Failure: test_invalid_db_write_overflow_cq 00:09:09.155 00:09:09.155 17:43:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:09.155 17:43:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:09.155 [2024-11-20 17:43:36.102580] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64514) is not found. Dropping the request. 00:09:19.136 Executing: test_write_invalid_db 00:09:19.136 Waiting for AER completion... 00:09:19.136 Failure: test_write_invalid_db 00:09:19.136 00:09:19.136 Executing: test_invalid_db_write_overflow_sq 00:09:19.136 Waiting for AER completion... 00:09:19.136 Failure: test_invalid_db_write_overflow_sq 00:09:19.136 00:09:19.136 Executing: test_invalid_db_write_overflow_cq 00:09:19.136 Waiting for AER completion... 00:09:19.136 Failure: test_invalid_db_write_overflow_cq 00:09:19.136 00:09:19.136 17:43:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:19.136 17:43:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:19.136 [2024-11-20 17:43:46.147366] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64514) is not found. Dropping the request. 00:09:29.116 Executing: test_write_invalid_db 00:09:29.116 Waiting for AER completion... 00:09:29.116 Failure: test_write_invalid_db 00:09:29.116 00:09:29.116 Executing: test_invalid_db_write_overflow_sq 00:09:29.116 Waiting for AER completion... 00:09:29.116 Failure: test_invalid_db_write_overflow_sq 00:09:29.116 00:09:29.116 Executing: test_invalid_db_write_overflow_cq 00:09:29.116 Waiting for AER completion... 
00:09:29.116 Failure: test_invalid_db_write_overflow_cq 00:09:29.116 00:09:29.116 17:43:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:29.116 17:43:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:29.116 [2024-11-20 17:43:56.250459] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64514) is not found. Dropping the request. 00:09:39.093 Executing: test_write_invalid_db 00:09:39.093 Waiting for AER completion... 00:09:39.093 Failure: test_write_invalid_db 00:09:39.093 00:09:39.093 Executing: test_invalid_db_write_overflow_sq 00:09:39.093 Waiting for AER completion... 00:09:39.093 Failure: test_invalid_db_write_overflow_sq 00:09:39.093 00:09:39.093 Executing: test_invalid_db_write_overflow_cq 00:09:39.093 Waiting for AER completion... 00:09:39.093 Failure: test_invalid_db_write_overflow_cq 00:09:39.093 00:09:39.093 00:09:39.093 real 0m40.345s 00:09:39.093 user 0m28.504s 00:09:39.093 sys 0m11.442s 00:09:39.093 17:44:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.093 17:44:05 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:09:39.093 ************************************ 00:09:39.093 END TEST nvme_doorbell_aers 00:09:39.093 ************************************ 00:09:39.093 17:44:06 nvme -- nvme/nvme.sh@97 -- # uname 00:09:39.093 17:44:06 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:09:39.093 17:44:06 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:39.093 17:44:06 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:39.093 17:44:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.093 17:44:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:39.093 ************************************ 00:09:39.093 START TEST nvme_multi_aen 00:09:39.093 ************************************ 00:09:39.093 17:44:06 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:39.353 [2024-11-20 17:44:06.293166] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64514) is not found. Dropping the request. 00:09:39.353 [2024-11-20 17:44:06.293258] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64514) is not found. Dropping the request. 00:09:39.353 [2024-11-20 17:44:06.293276] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64514) is not found. Dropping the request. 00:09:39.353 [2024-11-20 17:44:06.295305] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64514) is not found. Dropping the request. 00:09:39.353 [2024-11-20 17:44:06.295350] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64514) is not found. Dropping the request. 00:09:39.353 [2024-11-20 17:44:06.295364] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64514) is not found. Dropping the request. 00:09:39.353 [2024-11-20 17:44:06.296786] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64514) is not found. 
Dropping the request. 00:09:39.353 [2024-11-20 17:44:06.296823] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64514) is not found. Dropping the request. 00:09:39.353 [2024-11-20 17:44:06.296837] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64514) is not found. Dropping the request. 00:09:39.353 [2024-11-20 17:44:06.298220] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64514) is not found. Dropping the request. 00:09:39.353 [2024-11-20 17:44:06.298257] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64514) is not found. Dropping the request. 00:09:39.353 [2024-11-20 17:44:06.298271] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64514) is not found. Dropping the request. 00:09:39.353 Child process pid: 65029 00:09:39.612 [Child] Asynchronous Event Request test 00:09:39.612 [Child] Attached to 0000:00:10.0 00:09:39.612 [Child] Attached to 0000:00:11.0 00:09:39.612 [Child] Attached to 0000:00:13.0 00:09:39.612 [Child] Attached to 0000:00:12.0 00:09:39.612 [Child] Registering asynchronous event callbacks... 00:09:39.612 [Child] Getting orig temperature thresholds of all controllers 00:09:39.612 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:39.612 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:39.612 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:39.612 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:39.612 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:39.612 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:39.612 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:39.612 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:39.612 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:39.612 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.612 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.612 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.612 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.612 [Child] Cleaning up... 00:09:39.612 Asynchronous Event Request test 00:09:39.612 Attached to 0000:00:10.0 00:09:39.612 Attached to 0000:00:11.0 00:09:39.612 Attached to 0000:00:13.0 00:09:39.612 Attached to 0000:00:12.0 00:09:39.612 Reset controller to setup AER completions for this process 00:09:39.612 Registering asynchronous event callbacks... 
00:09:39.612 Getting orig temperature thresholds of all controllers 00:09:39.612 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:39.612 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:39.612 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:39.612 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:39.612 Setting all controllers temperature threshold low to trigger AER 00:09:39.612 Waiting for all controllers temperature threshold to be set lower 00:09:39.612 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:39.612 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:39.613 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:39.613 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:39.613 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:39.613 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:39.613 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:39.613 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:39.613 Waiting for all controllers to trigger AER and reset threshold 00:09:39.613 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.613 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.613 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.613 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:39.613 Cleaning up... 00:09:39.613 00:09:39.613 real 0m0.622s 00:09:39.613 user 0m0.210s 00:09:39.613 sys 0m0.301s 00:09:39.613 17:44:06 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.613 17:44:06 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:39.613 ************************************ 00:09:39.613 END TEST nvme_multi_aen 00:09:39.613 ************************************ 00:09:39.613 17:44:06 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:39.613 17:44:06 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:39.613 17:44:06 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.613 17:44:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:39.613 ************************************ 00:09:39.613 START TEST nvme_startup 00:09:39.613 ************************************ 00:09:39.613 17:44:06 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:39.872 Initializing NVMe Controllers 00:09:39.872 Attached to 0000:00:10.0 00:09:39.872 Attached to 0000:00:11.0 00:09:39.872 Attached to 0000:00:13.0 00:09:39.872 Attached to 0000:00:12.0 00:09:39.872 Initialization complete. 00:09:39.872 Time used:199233.984 (us). 
00:09:39.872 00:09:39.872 real 0m0.299s 00:09:39.872 user 0m0.112s 00:09:39.872 sys 0m0.143s 00:09:39.872 17:44:07 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.872 17:44:07 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:39.872 ************************************ 00:09:39.872 END TEST nvme_startup 00:09:39.872 ************************************ 00:09:40.131 17:44:07 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:40.131 17:44:07 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:40.131 17:44:07 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:40.131 17:44:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:40.131 ************************************ 00:09:40.131 START TEST nvme_multi_secondary 00:09:40.131 ************************************ 00:09:40.131 17:44:07 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:09:40.131 17:44:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65084 00:09:40.131 17:44:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:40.131 17:44:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:40.131 17:44:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65085 00:09:40.131 17:44:07 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:43.452 Initializing NVMe Controllers 00:09:43.452 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:43.452 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:43.452 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:43.452 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:43.452 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:43.452 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:43.452 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:43.452 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:43.452 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:43.452 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:43.452 Initialization complete. Launching workers. 
00:09:43.452 ======================================================== 00:09:43.452 Latency(us) 00:09:43.452 Device Information : IOPS MiB/s Average min max 00:09:43.452 PCIE (0000:00:10.0) NSID 1 from core 1: 5003.88 19.55 3195.26 1010.91 8518.08 00:09:43.452 PCIE (0000:00:11.0) NSID 1 from core 1: 5003.88 19.55 3197.10 1064.76 8502.45 00:09:43.452 PCIE (0000:00:13.0) NSID 1 from core 1: 5003.88 19.55 3197.16 1053.83 8449.97 00:09:43.452 PCIE (0000:00:12.0) NSID 1 from core 1: 5003.88 19.55 3197.25 1054.91 9023.63 00:09:43.452 PCIE (0000:00:12.0) NSID 2 from core 1: 5003.88 19.55 3197.48 1048.93 9285.32 00:09:43.452 PCIE (0000:00:12.0) NSID 3 from core 1: 5003.88 19.55 3197.57 1054.14 9572.18 00:09:43.452 ======================================================== 00:09:43.452 Total : 30023.26 117.28 3196.97 1010.91 9572.18 00:09:43.452 00:09:43.710 Initializing NVMe Controllers 00:09:43.710 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:43.710 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:43.710 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:43.710 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:43.710 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:43.710 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:43.710 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:43.711 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:43.711 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:43.711 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:43.711 Initialization complete. Launching workers. 00:09:43.711 ======================================================== 00:09:43.711 Latency(us) 00:09:43.711 Device Information : IOPS MiB/s Average min max 00:09:43.711 PCIE (0000:00:10.0) NSID 1 from core 2: 2954.05 11.54 5414.24 1327.53 17049.47 00:09:43.711 PCIE (0000:00:11.0) NSID 1 from core 2: 2954.05 11.54 5415.85 1296.26 14343.89 00:09:43.711 PCIE (0000:00:13.0) NSID 1 from core 2: 2954.05 11.54 5415.83 1299.24 14949.17 00:09:43.711 PCIE (0000:00:12.0) NSID 1 from core 2: 2954.05 11.54 5422.98 1404.61 18240.59 00:09:43.711 PCIE (0000:00:12.0) NSID 2 from core 2: 2954.05 11.54 5423.33 1350.47 18856.80 00:09:43.711 PCIE (0000:00:12.0) NSID 3 from core 2: 2954.05 11.54 5423.81 1258.89 16756.36 00:09:43.711 ======================================================== 00:09:43.711 Total : 17724.28 69.24 5419.34 1258.89 18856.80 00:09:43.711 00:09:43.711 17:44:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65084 00:09:45.612 Initializing NVMe Controllers 00:09:45.612 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:45.612 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:45.612 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:45.612 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:45.612 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:45.612 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:45.612 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:45.612 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:45.612 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:45.612 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:45.612 Initialization complete. Launching workers. 
00:09:45.612 ======================================================== 00:09:45.612 Latency(us) 00:09:45.612 Device Information : IOPS MiB/s Average min max 00:09:45.612 PCIE (0000:00:10.0) NSID 1 from core 0: 7847.73 30.66 2037.29 939.96 8991.88 00:09:45.612 PCIE (0000:00:11.0) NSID 1 from core 0: 7847.73 30.66 2038.33 978.87 8952.13 00:09:45.612 PCIE (0000:00:13.0) NSID 1 from core 0: 7847.73 30.66 2038.31 918.39 9414.95 00:09:45.612 PCIE (0000:00:12.0) NSID 1 from core 0: 7847.73 30.66 2038.26 893.18 9613.36 00:09:45.612 PCIE (0000:00:12.0) NSID 2 from core 0: 7847.73 30.66 2038.22 821.06 10373.62 00:09:45.612 PCIE (0000:00:12.0) NSID 3 from core 0: 7847.73 30.66 2038.18 744.78 10019.46 00:09:45.612 ======================================================== 00:09:45.612 Total : 47086.37 183.93 2038.10 744.78 10373.62 00:09:45.612 00:09:45.612 17:44:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65085 00:09:45.612 17:44:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65156 00:09:45.612 17:44:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:45.612 17:44:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65157 00:09:45.612 17:44:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:45.612 17:44:12 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:48.896 Initializing NVMe Controllers 00:09:48.896 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:48.896 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:48.896 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:48.896 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:48.896 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:48.896 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:48.896 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:48.896 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:48.896 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:48.896 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:48.896 Initialization complete. Launching workers. 
00:09:48.896 ======================================================== 00:09:48.896 Latency(us) 00:09:48.896 Device Information : IOPS MiB/s Average min max 00:09:48.896 PCIE (0000:00:10.0) NSID 1 from core 1: 5047.18 19.72 3167.85 1064.57 12028.77 00:09:48.896 PCIE (0000:00:11.0) NSID 1 from core 1: 5047.18 19.72 3169.61 1100.34 9607.93 00:09:48.896 PCIE (0000:00:13.0) NSID 1 from core 1: 5047.18 19.72 3170.00 1096.36 9537.48 00:09:48.896 PCIE (0000:00:12.0) NSID 1 from core 1: 5047.18 19.72 3170.15 1093.31 9093.26 00:09:48.896 PCIE (0000:00:12.0) NSID 2 from core 1: 5047.18 19.72 3170.67 1089.92 12012.58 00:09:48.896 PCIE (0000:00:12.0) NSID 3 from core 1: 5047.18 19.72 3170.91 1090.30 12004.84 00:09:48.896 ======================================================== 00:09:48.896 Total : 30283.08 118.29 3169.86 1064.57 12028.77 00:09:48.896 00:09:49.155 Initializing NVMe Controllers 00:09:49.155 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:49.155 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:49.155 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:49.155 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:49.155 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:49.155 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:49.155 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:49.155 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:49.155 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:49.155 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:49.155 Initialization complete. Launching workers. 00:09:49.155 ======================================================== 00:09:49.155 Latency(us) 00:09:49.155 Device Information : IOPS MiB/s Average min max 00:09:49.155 PCIE (0000:00:10.0) NSID 1 from core 0: 4689.76 18.32 3409.19 1008.08 9277.12 00:09:49.155 PCIE (0000:00:11.0) NSID 1 from core 0: 4689.76 18.32 3411.09 1040.61 8904.24 00:09:49.155 PCIE (0000:00:13.0) NSID 1 from core 0: 4689.76 18.32 3411.05 1051.50 11890.26 00:09:49.155 PCIE (0000:00:12.0) NSID 1 from core 0: 4689.76 18.32 3411.01 1042.85 11995.41 00:09:49.155 PCIE (0000:00:12.0) NSID 2 from core 0: 4689.76 18.32 3411.00 1033.10 9611.34 00:09:49.155 PCIE (0000:00:12.0) NSID 3 from core 0: 4689.76 18.32 3410.98 1042.19 9224.59 00:09:49.155 ======================================================== 00:09:49.155 Total : 28138.59 109.92 3410.72 1008.08 11995.41 00:09:49.155 00:09:51.055 Initializing NVMe Controllers 00:09:51.055 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:51.055 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:51.055 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:51.055 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:51.055 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:51.055 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:51.055 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:51.055 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:51.055 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:51.055 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:51.055 Initialization complete. Launching workers. 
00:09:51.055 ======================================================== 00:09:51.055 Latency(us) 00:09:51.055 Device Information : IOPS MiB/s Average min max 00:09:51.055 PCIE (0000:00:10.0) NSID 1 from core 2: 3207.11 12.53 4987.65 1050.04 21250.76 00:09:51.055 PCIE (0000:00:11.0) NSID 1 from core 2: 3207.11 12.53 4988.60 1066.82 16915.48 00:09:51.055 PCIE (0000:00:13.0) NSID 1 from core 2: 3207.11 12.53 4988.29 1094.49 17189.78 00:09:51.055 PCIE (0000:00:12.0) NSID 1 from core 2: 3207.11 12.53 4988.22 1102.75 16966.06 00:09:51.055 PCIE (0000:00:12.0) NSID 2 from core 2: 3207.11 12.53 4988.42 1095.70 17130.81 00:09:51.055 PCIE (0000:00:12.0) NSID 3 from core 2: 3207.11 12.53 4988.35 1026.45 21077.87 00:09:51.055 ======================================================== 00:09:51.055 Total : 19242.65 75.17 4988.25 1026.45 21250.76 00:09:51.055 00:09:51.055 17:44:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65156 00:09:51.055 17:44:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65157 00:09:51.055 00:09:51.055 real 0m10.881s 00:09:51.055 user 0m18.601s 00:09:51.055 sys 0m1.085s 00:09:51.055 17:44:17 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.055 17:44:17 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:51.055 ************************************ 00:09:51.055 END TEST nvme_multi_secondary 00:09:51.055 ************************************ 00:09:51.055 17:44:18 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:51.055 17:44:18 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:51.055 17:44:18 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64087 ]] 00:09:51.055 17:44:18 nvme -- common/autotest_common.sh@1094 -- # kill 64087 00:09:51.055 17:44:18 nvme -- common/autotest_common.sh@1095 -- # wait 64087 00:09:51.055 [2024-11-20 17:44:18.053617] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:51.055 [2024-11-20 17:44:18.053762] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:51.055 [2024-11-20 17:44:18.053892] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:51.055 [2024-11-20 17:44:18.053945] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:51.055 [2024-11-20 17:44:18.060428] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:51.055 [2024-11-20 17:44:18.060508] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:51.055 [2024-11-20 17:44:18.060540] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:51.055 [2024-11-20 17:44:18.060574] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:51.055 [2024-11-20 17:44:18.065292] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 
00:09:51.055 [2024-11-20 17:44:18.065397] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:51.055 [2024-11-20 17:44:18.065429] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:51.055 [2024-11-20 17:44:18.065463] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:51.055 [2024-11-20 17:44:18.069650] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:51.055 [2024-11-20 17:44:18.069711] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:51.055 [2024-11-20 17:44:18.069733] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:51.055 [2024-11-20 17:44:18.069757] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65027) is not found. Dropping the request. 00:09:51.313 17:44:18 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:09:51.313 17:44:18 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:09:51.313 17:44:18 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:51.313 17:44:18 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:51.313 17:44:18 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.313 17:44:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:51.313 ************************************ 00:09:51.313 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:51.313 ************************************ 00:09:51.313 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:51.313 * Looking for test storage... 
00:09:51.313 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:51.313 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:51.313 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:51.313 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:51.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.571 --rc genhtml_branch_coverage=1 00:09:51.571 --rc genhtml_function_coverage=1 00:09:51.571 --rc genhtml_legend=1 00:09:51.571 --rc geninfo_all_blocks=1 00:09:51.571 --rc geninfo_unexecuted_blocks=1 00:09:51.571 00:09:51.571 ' 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:51.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.571 --rc genhtml_branch_coverage=1 00:09:51.571 --rc genhtml_function_coverage=1 00:09:51.571 --rc genhtml_legend=1 00:09:51.571 --rc geninfo_all_blocks=1 00:09:51.571 --rc geninfo_unexecuted_blocks=1 00:09:51.571 00:09:51.571 ' 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:51.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.571 --rc genhtml_branch_coverage=1 00:09:51.571 --rc genhtml_function_coverage=1 00:09:51.571 --rc genhtml_legend=1 00:09:51.571 --rc geninfo_all_blocks=1 00:09:51.571 --rc geninfo_unexecuted_blocks=1 00:09:51.571 00:09:51.571 ' 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:51.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.571 --rc genhtml_branch_coverage=1 00:09:51.571 --rc genhtml_function_coverage=1 00:09:51.571 --rc genhtml_legend=1 00:09:51.571 --rc geninfo_all_blocks=1 00:09:51.571 --rc geninfo_unexecuted_blocks=1 00:09:51.571 00:09:51.571 ' 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:51.571 
17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:51.571 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65319 00:09:51.572 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:51.572 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:51.572 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65319 00:09:51.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:51.572 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65319 ']' 00:09:51.572 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:51.572 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:51.572 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:51.572 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:51.572 17:44:18 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:51.572 [2024-11-20 17:44:18.741863] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:09:51.572 [2024-11-20 17:44:18.741990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65319 ] 00:09:51.830 [2024-11-20 17:44:18.945193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:52.089 [2024-11-20 17:44:19.103269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.089 [2024-11-20 17:44:19.103358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:52.089 [2024-11-20 17:44:19.103531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.089 [2024-11-20 17:44:19.103564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:53.023 nvme0n1 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_jgIT7.txt 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:53.023 true 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732124660 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65348 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:53.023 17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:53.023 
17:44:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:55.553 [2024-11-20 17:44:22.136295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:55.553 [2024-11-20 17:44:22.136923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:55.553 [2024-11-20 17:44:22.137100] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:55.553 [2024-11-20 17:44:22.137273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:55.553 [2024-11-20 17:44:22.139685] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65348 00:09:55.553 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65348 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65348 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_jgIT7.txt 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:55.553 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_jgIT7.txt 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65319 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65319 ']' 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65319 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65319 00:09:55.554 killing process with pid 65319 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65319' 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65319 00:09:55.554 17:44:22 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65319 00:09:58.088 17:44:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:58.088 17:44:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:58.088 00:09:58.088 real 0m6.580s 00:09:58.088 user 0m22.797s 00:09:58.088 sys 0m0.837s 00:09:58.088 ************************************ 00:09:58.088 END TEST bdev_nvme_reset_stuck_adm_cmd 
00:09:58.088 ************************************ 00:09:58.088 17:44:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.088 17:44:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:58.088 17:44:24 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:58.088 17:44:24 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:58.088 17:44:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.088 17:44:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.088 17:44:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:58.088 ************************************ 00:09:58.088 START TEST nvme_fio 00:09:58.088 ************************************ 00:09:58.088 17:44:24 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:09:58.088 17:44:24 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:58.088 17:44:24 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:58.088 17:44:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:58.088 17:44:24 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:58.088 17:44:24 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:09:58.088 17:44:24 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:58.088 17:44:24 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:58.088 17:44:24 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:58.088 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:58.088 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:58.088 17:44:25 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:58.088 17:44:25 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:58.088 17:44:25 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:58.088 17:44:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:58.088 17:44:25 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:58.366 17:44:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:58.367 17:44:25 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:58.625 17:44:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:58.626 17:44:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:58.626 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:58.626 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:09:58.626 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:58.626 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:09:58.626 17:44:25 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:58.626 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:09:58.626 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:09:58.626 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:09:58.626 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:58.626 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:09:58.626 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:09:58.626 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:58.626 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:58.626 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:09:58.626 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:58.626 17:44:25 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:58.884 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:58.884 fio-3.35 00:09:58.884 Starting 1 thread 00:10:03.075 00:10:03.075 test: (groupid=0, jobs=1): err= 0: pid=65506: Wed Nov 20 17:44:29 2024 00:10:03.075 read: IOPS=22.5k, BW=87.8MiB/s (92.1MB/s)(176MiB/2001msec) 00:10:03.075 slat (usec): min=3, max=108, avg= 4.30, stdev= 1.10 00:10:03.075 clat (usec): min=206, max=13122, avg=2836.57, stdev=412.15 00:10:03.075 lat (usec): min=211, max=13231, avg=2840.87, stdev=412.64 00:10:03.075 clat percentiles (usec): 00:10:03.075 | 1.00th=[ 2114], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2671], 00:10:03.075 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2802], 00:10:03.075 | 70.00th=[ 2835], 80.00th=[ 2900], 90.00th=[ 2999], 95.00th=[ 3425], 00:10:03.075 | 99.00th=[ 4113], 99.50th=[ 4490], 99.90th=[ 8160], 99.95th=[10814], 00:10:03.075 | 99.99th=[12911] 00:10:03.075 bw ( KiB/s): min=87656, max=89064, per=98.44%, avg=88514.67, stdev=753.25, samples=3 00:10:03.075 iops : min=21914, max=22266, avg=22128.67, stdev=188.31, samples=3 00:10:03.075 write: IOPS=22.3k, BW=87.3MiB/s (91.5MB/s)(175MiB/2001msec); 0 zone resets 00:10:03.075 slat (nsec): min=3883, max=32656, avg=4843.30, stdev=1062.85 00:10:03.075 clat (usec): min=186, max=13009, avg=2848.08, stdev=425.53 00:10:03.075 lat (usec): min=191, max=13032, avg=2852.92, stdev=425.97 00:10:03.075 clat percentiles (usec): 00:10:03.075 | 1.00th=[ 2114], 5.00th=[ 2573], 10.00th=[ 2638], 20.00th=[ 2704], 00:10:03.075 | 30.00th=[ 2737], 40.00th=[ 2769], 50.00th=[ 2802], 60.00th=[ 2835], 00:10:03.075 | 70.00th=[ 2868], 80.00th=[ 2900], 90.00th=[ 2999], 95.00th=[ 3458], 00:10:03.075 | 99.00th=[ 4146], 99.50th=[ 4555], 99.90th=[ 9110], 99.95th=[11207], 00:10:03.075 | 99.99th=[12780] 00:10:03.075 bw ( KiB/s): min=87248, max=90088, per=99.23%, avg=88688.00, stdev=1420.42, samples=3 00:10:03.075 iops : min=21812, max=22522, avg=22172.00, stdev=355.11, samples=3 00:10:03.075 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:10:03.075 lat (msec) : 2=0.70%, 4=97.79%, 10=1.40%, 20=0.07% 00:10:03.075 cpu : usr=99.30%, sys=0.20%, ctx=5, majf=0, 
minf=607 00:10:03.075 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:03.075 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:03.075 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:03.075 issued rwts: total=44981,44712,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:03.075 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:03.075 00:10:03.075 Run status group 0 (all jobs): 00:10:03.075 READ: bw=87.8MiB/s (92.1MB/s), 87.8MiB/s-87.8MiB/s (92.1MB/s-92.1MB/s), io=176MiB (184MB), run=2001-2001msec 00:10:03.075 WRITE: bw=87.3MiB/s (91.5MB/s), 87.3MiB/s-87.3MiB/s (91.5MB/s-91.5MB/s), io=175MiB (183MB), run=2001-2001msec 00:10:03.075 ----------------------------------------------------- 00:10:03.075 Suppressions used: 00:10:03.075 count bytes template 00:10:03.075 1 32 /usr/src/fio/parse.c 00:10:03.075 1 8 libtcmalloc_minimal.so 00:10:03.075 ----------------------------------------------------- 00:10:03.075 00:10:03.075 17:44:29 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:03.075 17:44:29 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:03.075 17:44:29 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:03.075 17:44:29 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:03.075 17:44:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:03.075 17:44:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:03.075 17:44:30 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:03.075 17:44:30 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:03.075 17:44:30 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:03.075 17:44:30 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:03.075 17:44:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:03.075 17:44:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:03.075 17:44:30 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:03.075 17:44:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:03.075 17:44:30 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:03.075 17:44:30 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:03.075 17:44:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:03.075 17:44:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:03.075 17:44:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:03.075 17:44:30 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:03.075 17:44:30 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:03.075 17:44:30 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:03.075 17:44:30 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:03.075 17:44:30 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:03.335 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:03.335 fio-3.35 00:10:03.335 Starting 1 thread 00:10:07.527 00:10:07.527 test: (groupid=0, jobs=1): err= 0: pid=65572: Wed Nov 20 17:44:34 2024 00:10:07.527 read: IOPS=22.9k, BW=89.3MiB/s (93.7MB/s)(179MiB/2001msec) 00:10:07.527 slat (usec): min=3, max=142, avg= 4.23, stdev= 1.23 00:10:07.527 clat (usec): min=190, max=13056, avg=2791.04, stdev=376.50 00:10:07.527 lat (usec): min=194, max=13126, avg=2795.27, stdev=376.81 00:10:07.527 clat percentiles (usec): 00:10:07.527 | 1.00th=[ 2147], 5.00th=[ 2573], 10.00th=[ 2606], 20.00th=[ 2671], 00:10:07.527 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802], 00:10:07.527 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2933], 95.00th=[ 3032], 00:10:07.527 | 99.00th=[ 4047], 99.50th=[ 4686], 99.90th=[ 7898], 99.95th=[10814], 00:10:07.527 | 99.99th=[12780] 00:10:07.527 bw ( KiB/s): min=90344, max=92336, per=99.74%, avg=91226.67, stdev=1015.16, samples=3 00:10:07.527 iops : min=22586, max=23084, avg=22806.67, stdev=253.79, samples=3 00:10:07.527 write: IOPS=22.7k, BW=88.8MiB/s (93.1MB/s)(178MiB/2001msec); 0 zone resets 00:10:07.527 slat (usec): min=3, max=164, avg= 4.74, stdev= 1.20 00:10:07.527 clat (usec): min=223, max=12860, avg=2798.40, stdev=383.45 00:10:07.527 lat (usec): min=227, max=12878, avg=2803.14, stdev=383.72 00:10:07.527 clat percentiles (usec): 00:10:07.527 | 1.00th=[ 2147], 5.00th=[ 2573], 10.00th=[ 2638], 20.00th=[ 2671], 00:10:07.527 | 30.00th=[ 2704], 40.00th=[ 2737], 50.00th=[ 2769], 60.00th=[ 2802], 00:10:07.527 | 70.00th=[ 2835], 80.00th=[ 2868], 90.00th=[ 2933], 95.00th=[ 3032], 00:10:07.527 | 99.00th=[ 4015], 99.50th=[ 4686], 99.90th=[ 8848], 99.95th=[10945], 00:10:07.527 | 99.99th=[12649] 00:10:07.527 bw ( KiB/s): min=89768, max=92560, per=100.00%, avg=91386.67, stdev=1448.29, samples=3 00:10:07.527 iops : min=22442, max=23140, avg=22846.67, stdev=362.07, samples=3 00:10:07.527 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:10:07.527 lat (msec) : 2=0.72%, 4=98.17%, 10=0.98%, 20=0.07% 00:10:07.527 cpu : usr=98.80%, sys=0.40%, ctx=7, majf=0, minf=607 00:10:07.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:07.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:07.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:07.527 issued rwts: total=45753,45474,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:07.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:07.527 00:10:07.527 Run status group 0 (all jobs): 00:10:07.527 READ: bw=89.3MiB/s (93.7MB/s), 89.3MiB/s-89.3MiB/s (93.7MB/s-93.7MB/s), io=179MiB (187MB), run=2001-2001msec 00:10:07.527 WRITE: bw=88.8MiB/s (93.1MB/s), 88.8MiB/s-88.8MiB/s (93.1MB/s-93.1MB/s), io=178MiB (186MB), run=2001-2001msec 00:10:07.527 ----------------------------------------------------- 00:10:07.527 Suppressions used: 00:10:07.527 count bytes template 00:10:07.527 1 32 /usr/src/fio/parse.c 00:10:07.527 1 8 libtcmalloc_minimal.so 00:10:07.527 ----------------------------------------------------- 00:10:07.527 00:10:07.527 
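The ldd | grep | awk xtrace repeated before each fio invocation above comes from the fio_plugin helper in common/autotest_common.sh: it locates whichever sanitizer runtime the SPDK ioengine links against and preloads it ahead of the plugin. A condensed sketch of that loop, reconstructed from the traced commands at autotest_common.sh@1341-1356 (anything not visible in the trace, such as the exact function signature, is an assumption):

  fio_plugin() {
      # Reconstructed sketch; only the commands traced in the log above are confirmed.
      local fio_dir=/usr/src/fio
      local sanitizers=('libasan' 'libclang_rt.asan')
      local plugin=$1; shift
      local asan_lib=
      for sanitizer in "${sanitizers[@]}"; do
          # Third ldd column is the resolved library path, e.g. /usr/lib64/libasan.so.8.
          asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
          [[ -n $asan_lib ]] && break
      done
      # Preload the sanitizer runtime before the SPDK ioengine so ASan interposes
      # first, then hand the remaining arguments (job file, --filename, --bs) to fio.
      LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
  }

Each per-controller run above is this helper invoked with the spdk_nvme plugin, the example_config.fio job file, and a PCIe traddr filename.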
17:44:34 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:07.527 17:44:34 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:07.527 17:44:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:07.527 17:44:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:07.787 17:44:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:07.787 17:44:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:08.415 17:44:35 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:08.415 17:44:35 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:08.415 17:44:35 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:08.415 17:44:35 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:08.415 17:44:35 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:08.415 17:44:35 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:08.416 17:44:35 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:08.416 17:44:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:08.416 17:44:35 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:08.416 17:44:35 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:08.416 17:44:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:08.416 17:44:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:08.416 17:44:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:08.416 17:44:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:08.416 17:44:35 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:08.416 17:44:35 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:08.416 17:44:35 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:08.416 17:44:35 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:08.416 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:08.416 fio-3.35 00:10:08.416 Starting 1 thread 00:10:12.609 00:10:12.609 test: (groupid=0, jobs=1): err= 0: pid=65637: Wed Nov 20 17:44:38 2024 00:10:12.609 read: IOPS=19.3k, BW=75.4MiB/s (79.0MB/s)(151MiB/2001msec) 00:10:12.609 slat (nsec): min=4201, max=84725, avg=5457.98, stdev=1773.33 00:10:12.609 clat (usec): min=267, max=16318, avg=3295.71, stdev=697.63 00:10:12.609 lat (usec): min=272, max=16403, avg=3301.17, stdev=698.66 00:10:12.609 clat percentiles (usec): 00:10:12.609 | 1.00th=[ 2868], 5.00th=[ 2966], 10.00th=[ 2999], 20.00th=[ 3064], 00:10:12.609 | 30.00th=[ 3097], 
40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3195], 00:10:12.609 | 70.00th=[ 3228], 80.00th=[ 3294], 90.00th=[ 3523], 95.00th=[ 3949], 00:10:12.609 | 99.00th=[ 7242], 99.50th=[ 8291], 99.90th=[ 9634], 99.95th=[12911], 00:10:12.609 | 99.99th=[16057] 00:10:12.609 bw ( KiB/s): min=70448, max=81320, per=97.98%, avg=75624.00, stdev=5454.62, samples=3 00:10:12.609 iops : min=17612, max=20330, avg=18906.00, stdev=1363.66, samples=3 00:10:12.609 write: IOPS=19.3k, BW=75.3MiB/s (78.9MB/s)(151MiB/2001msec); 0 zone resets 00:10:12.609 slat (nsec): min=4399, max=59485, avg=5765.49, stdev=1822.86 00:10:12.609 clat (usec): min=192, max=16051, avg=3309.49, stdev=722.78 00:10:12.609 lat (usec): min=197, max=16075, avg=3315.25, stdev=723.83 00:10:12.609 clat percentiles (usec): 00:10:12.609 | 1.00th=[ 2868], 5.00th=[ 2966], 10.00th=[ 2999], 20.00th=[ 3064], 00:10:12.609 | 30.00th=[ 3097], 40.00th=[ 3130], 50.00th=[ 3163], 60.00th=[ 3195], 00:10:12.609 | 70.00th=[ 3261], 80.00th=[ 3326], 90.00th=[ 3523], 95.00th=[ 3982], 00:10:12.609 | 99.00th=[ 7373], 99.50th=[ 8291], 99.90th=[10683], 99.95th=[13435], 00:10:12.609 | 99.99th=[15664] 00:10:12.609 bw ( KiB/s): min=70416, max=81344, per=98.14%, avg=75640.00, stdev=5479.79, samples=3 00:10:12.609 iops : min=17604, max=20336, avg=18910.00, stdev=1369.95, samples=3 00:10:12.609 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:10:12.609 lat (msec) : 2=0.05%, 4=95.42%, 10=4.38%, 20=0.10% 00:10:12.609 cpu : usr=99.15%, sys=0.20%, ctx=2, majf=0, minf=607 00:10:12.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:12.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.609 issued rwts: total=38611,38558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.609 00:10:12.609 Run status group 0 (all jobs): 00:10:12.609 READ: bw=75.4MiB/s (79.0MB/s), 75.4MiB/s-75.4MiB/s (79.0MB/s-79.0MB/s), io=151MiB (158MB), run=2001-2001msec 00:10:12.609 WRITE: bw=75.3MiB/s (78.9MB/s), 75.3MiB/s-75.3MiB/s (78.9MB/s-78.9MB/s), io=151MiB (158MB), run=2001-2001msec 00:10:12.609 ----------------------------------------------------- 00:10:12.609 Suppressions used: 00:10:12.609 count bytes template 00:10:12.609 1 32 /usr/src/fio/parse.c 00:10:12.609 1 8 libtcmalloc_minimal.so 00:10:12.609 ----------------------------------------------------- 00:10:12.609 00:10:12.609 17:44:39 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:12.609 17:44:39 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:12.609 17:44:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:12.610 17:44:39 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:12.610 17:44:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:12.610 17:44:39 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:10:12.610 17:44:39 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:12.610 17:44:39 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:12.610 17:44:39 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:12.610 17:44:39 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:10:12.610 17:44:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:12.610 17:44:39 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:10:12.610 17:44:39 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:12.610 17:44:39 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:10:12.610 17:44:39 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:10:12.610 17:44:39 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:10:12.610 17:44:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:12.610 17:44:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:10:12.610 17:44:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:10:12.870 17:44:39 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:12.870 17:44:39 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:12.870 17:44:39 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:10:12.870 17:44:39 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:12.870 17:44:39 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:10:12.870 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:12.870 fio-3.35 00:10:12.870 Starting 1 thread 00:10:18.167 00:10:18.167 test: (groupid=0, jobs=1): err= 0: pid=65699: Wed Nov 20 17:44:45 2024 00:10:18.167 read: IOPS=22.0k, BW=85.9MiB/s (90.1MB/s)(172MiB/2001msec) 00:10:18.167 slat (nsec): min=3756, max=64085, avg=4500.50, stdev=1171.13 00:10:18.167 clat (usec): min=183, max=12266, avg=2901.57, stdev=377.76 00:10:18.167 lat (usec): min=188, max=12330, avg=2906.07, stdev=378.21 00:10:18.167 clat percentiles (usec): 00:10:18.167 | 1.00th=[ 2343], 5.00th=[ 2606], 10.00th=[ 2671], 20.00th=[ 2737], 00:10:18.167 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2868], 00:10:18.167 | 70.00th=[ 2933], 80.00th=[ 2999], 90.00th=[ 3195], 95.00th=[ 3425], 00:10:18.167 | 99.00th=[ 3884], 99.50th=[ 4752], 99.90th=[ 6718], 99.95th=[ 9503], 00:10:18.167 | 99.99th=[11994] 00:10:18.167 bw ( KiB/s): min=84104, max=89664, per=97.78%, avg=86024.00, stdev=3153.92, samples=3 00:10:18.167 iops : min=21026, max=22416, avg=21506.00, stdev=788.48, samples=3 00:10:18.167 write: IOPS=21.9k, BW=85.4MiB/s (89.5MB/s)(171MiB/2001msec); 0 zone resets 00:10:18.167 slat (nsec): min=3886, max=42092, avg=4985.54, stdev=1214.40 00:10:18.167 clat (usec): min=255, max=12037, avg=2909.00, stdev=385.19 00:10:18.167 lat (usec): min=260, max=12058, avg=2913.99, stdev=385.63 00:10:18.167 clat percentiles (usec): 00:10:18.167 | 1.00th=[ 2343], 5.00th=[ 2638], 10.00th=[ 2671], 20.00th=[ 2737], 00:10:18.167 | 30.00th=[ 2769], 40.00th=[ 2802], 50.00th=[ 2835], 60.00th=[ 2900], 00:10:18.167 | 70.00th=[ 2933], 80.00th=[ 2999], 90.00th=[ 3228], 95.00th=[ 3425], 
00:10:18.167 | 99.00th=[ 3916], 99.50th=[ 4817], 99.90th=[ 7635], 99.95th=[10028], 00:10:18.167 | 99.99th=[11731] 00:10:18.167 bw ( KiB/s): min=84184, max=90176, per=98.59%, avg=86189.33, stdev=3452.58, samples=3 00:10:18.168 iops : min=21046, max=22544, avg=21547.33, stdev=863.14, samples=3 00:10:18.168 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.01% 00:10:18.168 lat (msec) : 2=0.38%, 4=98.64%, 10=0.89%, 20=0.05% 00:10:18.168 cpu : usr=99.40%, sys=0.05%, ctx=3, majf=0, minf=606 00:10:18.168 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:18.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.168 issued rwts: total=44010,43732,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.168 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:18.168 00:10:18.168 Run status group 0 (all jobs): 00:10:18.168 READ: bw=85.9MiB/s (90.1MB/s), 85.9MiB/s-85.9MiB/s (90.1MB/s-90.1MB/s), io=172MiB (180MB), run=2001-2001msec 00:10:18.168 WRITE: bw=85.4MiB/s (89.5MB/s), 85.4MiB/s-85.4MiB/s (89.5MB/s-89.5MB/s), io=171MiB (179MB), run=2001-2001msec 00:10:18.428 ----------------------------------------------------- 00:10:18.428 Suppressions used: 00:10:18.428 count bytes template 00:10:18.428 1 32 /usr/src/fio/parse.c 00:10:18.428 1 8 libtcmalloc_minimal.so 00:10:18.428 ----------------------------------------------------- 00:10:18.428 00:10:18.428 17:44:45 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:18.428 17:44:45 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:10:18.428 00:10:18.428 real 0m20.450s 00:10:18.428 user 0m15.019s 00:10:18.428 sys 0m6.930s 00:10:18.428 17:44:45 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.428 ************************************ 00:10:18.428 END TEST nvme_fio 00:10:18.428 ************************************ 00:10:18.428 17:44:45 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:10:18.428 00:10:18.428 real 1m36.171s 00:10:18.428 user 3m44.815s 00:10:18.428 sys 0m26.437s 00:10:18.428 17:44:45 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:18.428 ************************************ 00:10:18.428 17:44:45 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:18.428 END TEST nvme 00:10:18.428 ************************************ 00:10:18.428 17:44:45 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:10:18.428 17:44:45 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:18.428 17:44:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:18.428 17:44:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:18.428 17:44:45 -- common/autotest_common.sh@10 -- # set +x 00:10:18.428 ************************************ 00:10:18.428 START TEST nvme_scc 00:10:18.428 ************************************ 00:10:18.428 17:44:45 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:10:18.688 * Looking for test storage... 
00:10:18.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:18.688 17:44:45 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:18.688 17:44:45 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:18.688 17:44:45 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:18.688 17:44:45 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:18.688 17:44:45 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:18.688 17:44:45 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:18.688 17:44:45 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@345 -- # : 1 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@368 -- # return 0 00:10:18.689 17:44:45 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:18.689 17:44:45 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:18.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.689 --rc genhtml_branch_coverage=1 00:10:18.689 --rc genhtml_function_coverage=1 00:10:18.689 --rc genhtml_legend=1 00:10:18.689 --rc geninfo_all_blocks=1 00:10:18.689 --rc geninfo_unexecuted_blocks=1 00:10:18.689 00:10:18.689 ' 00:10:18.689 17:44:45 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:18.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.689 --rc genhtml_branch_coverage=1 00:10:18.689 --rc genhtml_function_coverage=1 00:10:18.689 --rc genhtml_legend=1 00:10:18.689 --rc geninfo_all_blocks=1 00:10:18.689 --rc geninfo_unexecuted_blocks=1 00:10:18.689 00:10:18.689 ' 00:10:18.689 17:44:45 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:18.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.689 --rc genhtml_branch_coverage=1 00:10:18.689 --rc genhtml_function_coverage=1 00:10:18.689 --rc genhtml_legend=1 00:10:18.689 --rc geninfo_all_blocks=1 00:10:18.689 --rc geninfo_unexecuted_blocks=1 00:10:18.689 00:10:18.689 ' 00:10:18.689 17:44:45 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:18.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:18.689 --rc genhtml_branch_coverage=1 00:10:18.689 --rc genhtml_function_coverage=1 00:10:18.689 --rc genhtml_legend=1 00:10:18.689 --rc geninfo_all_blocks=1 00:10:18.689 --rc geninfo_unexecuted_blocks=1 00:10:18.689 00:10:18.689 ' 00:10:18.689 17:44:45 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:18.689 17:44:45 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:18.689 17:44:45 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:18.689 17:44:45 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:18.689 17:44:45 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.689 17:44:45 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.689 17:44:45 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.689 17:44:45 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.689 17:44:45 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.689 17:44:45 nvme_scc -- paths/export.sh@5 -- # export PATH 00:10:18.689 17:44:45 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:18.689 17:44:45 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:10:18.689 17:44:45 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:18.689 17:44:45 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:10:18.689 17:44:45 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:18.689 17:44:45 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:10:18.689 17:44:45 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:18.689 17:44:45 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:18.689 17:44:45 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:18.689 17:44:45 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:10:18.689 17:44:45 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:18.689 17:44:45 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:10:18.689 17:44:45 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:10:18.689 17:44:45 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:10:18.689 17:44:45 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:19.258 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:19.517 Waiting for block devices as requested 00:10:19.517 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:19.776 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:19.776 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:20.034 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:25.330 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:25.330 17:44:52 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:25.330 17:44:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:25.330 17:44:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:25.330 17:44:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:25.330 17:44:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:25.330 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.331 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:25.332 17:44:52 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.332 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:25.333 17:44:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:25.333 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:25.334 
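For readability: the xtrace above and below is nvme/functions.sh walking a device. The helper nvme_get runs /usr/local/src/nvme-cli/nvme id-ctrl (or id-ns) against the node, splits each output line on the first ':' via IFS, and evals the pair into a global associative array named after the device. Because the split is on the first colon only, the wrapped power-state output re-enters the loop, which is why nvme0[ps0] is followed by a synthetic nvme0[rwt] entry earlier in this trace. A minimal self-contained sketch of that loop, reconstructed from the traced statements at functions.sh@16-23; the whitespace trimming is an assumption, only the shape is taken from the log:

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                        # e.g. ng0n1 becomes a global assoc array
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue              # banner/blank lines carry no "key: value"
            reg="${reg%"${reg##*[![:space:]]}"}"   # right-trim the key (assumed upstream detail)
            val="${val#"${val%%[![:space:]]*}"}"   # left-trim the value (assumed upstream detail)
            eval "${ref}[${reg}]=\"\$val\""        # ng0n1[nsze]="0x140000", nvme0[vwc]="0x7", ...
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    # Invocation as in the trace:
    #   nvme_get ng0n1 id-ns /dev/ng0n1
    #   echo "${ng0n1[nsze]}"   # -> 0x140000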
17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:25.334 17:44:52 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.334 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:25.335 17:44:52 nvme_scc 
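The for ns loop at functions.sh@54, visible just above as it moves from ng0n1 to nvme0n1, uses an extglob pattern to enumerate both namespace device flavors under the controller's sysfs node. A standalone sketch of that walk, assuming extglob and nullglob are enabled as the harness does:

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme0
    declare -A _ctrl_ns=()

    # "ng${ctrl##*nvme}" expands to ng0 and "${ctrl##*/}n" to nvme0n, so the pattern
    # @("ng0"|"nvme0n")* picks up both the char device ng0n1 and the block device nvme0n1.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                    # ng0n1 on the first pass, nvme0n1 on the second
        _ctrl_ns[${ns_dev##*n}]=$ns_dev     # both resolve NSID 1; the later block name wins
    done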
-- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.335 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:25.336 17:44:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.336 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:25.337 17:44:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:25.337 17:44:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:25.337 17:44:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:25.337 17:44:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:25.337 17:44:52 
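At functions.sh@60-63 the finished controller is filed into four maps, as traced above: ctrls[nvme0]=nvme0, nvmes[nvme0]=nvme0_ns, bdfs[nvme0]=0000:00:11.0 and ordered_ctrls[0]=nvme0. The loop then only proceeds to nvme1 because pci_can_use 0000:00:10.0 returns 0. A sketch of that gate, consistent with the empty allow/block checks traced from scripts/common.sh@18-27; the PCI_ALLOWED/PCI_BLOCKED variable names are an assumption, not confirmed by this log:

    pci_can_use() {
        local i
        # Block list first (the "[[ =~ 0000:00:10.0 ]]" check above, empty in this run).
        [[ " ${PCI_BLOCKED-} " =~ " $1 " ]] && return 1
        # An empty allow list (the "[[ -z '' ]]" above) admits every BDF.
        [[ -z ${PCI_ALLOWED-} ]] && return 0
        for i in $PCI_ALLOWED; do
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }

    # pci_can_use 0000:00:10.0 && ctrl_dev=nvme1   # returns 0 here, as in the trace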
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.337 
17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:25.337 
17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.337 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- 
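The wctemp=343 and cctemp=373 captured just above are the warning and critical temperature thresholds, which NVMe reports in Kelvin. A quick conversion, assuming the nvme1 array as populated by this trace:

    echo "warning:  $(( ${nvme1[wctemp]} - 273 )) C"   # 343 K -> 70 C
    echo "critical: $(( ${nvme1[cctemp]} - 273 )) C"   # 373 K -> 100 C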
nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.338 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.339 17:44:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.339 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:25.340 17:44:52 
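The controller dump above and the per-namespace dumps that follow are all expansions of the same nvme_get helper: it runs nvme-cli, splits each "reg : val" output line on the colon (IFS=: read -r reg val), and evals the value into a global associative array named after the device. A minimal sketch of that pattern, paraphrased from the xtrace rather than copied from the SPDK source:

    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                    # e.g. declare -gA nvme1=(), as at functions.sh@20
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}           # "sqes      " -> "sqes"
            val=${val#"${val%%[![:space:]]*}"} # trim leading blanks from the value
            [[ -n $val ]] || continue          # skip header/empty lines (the [[ -n ... ]] checks above)
            eval "${ref}[\$reg]=\$val"         # nvme1[sqes]=0x66, nvme1[subnqn]=..., etc.
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }
    # usage: nvme_get nvme1 id-ctrl /dev/nvme1; echo "${nvme1[subnqn]}"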
00:10:25.340 17:44:52 nvme_scc -- nvme/functions.sh -- # ng1n1 id-ns: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:25.341 17:44:52 nvme_scc -- nvme/functions.sh -- # ng1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:10:25.341 17:44:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
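A quick sanity check on the identify data above: nsze/ncap/nuse are all 0x17a17a blocks, and the in-use LBA format (lbaf7) has lbads:12, i.e. 2^12 = 4096-byte data blocks (the ms:64 metadata bytes are not counted here). The arithmetic, runnable in the same shell:

    # nsze = 0x17a17a blocks at 4096 B each (in-use format lbaf7, lbads:12)
    printf '%d blocks x %d B = %d bytes (~%d GB)\n' \
        $(( 0x17a17a )) $(( 1 << 12 )) \
        $(( 0x17a17a << 12 )) $(( (0x17a17a << 12) / 1000000000 ))
    # -> 1548666 blocks x 4096 B = 6343335936 bytes (~6 GB)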
00:10:25.341 17:44:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:25.341 17:44:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:10:25.341 17:44:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:10:25.341 17:44:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:10:25.341 17:44:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:10:25.342 17:44:52 nvme_scc -- nvme/functions.sh -- # nvme1n1 id-ns: identical to ng1n1 above (nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000; lbaf0-lbaf7 as above, lbaf7 in use)
00:10:25.343 17:44:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
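With nvme1 and both of its namespace nodes (ng1n1, nvme1n1) fully enumerated, two of the captured controller fields are worth decoding: sqes=0x66 and cqes=0x44 are packed log2 sizes per the NVMe spec (low nibble = required entry size, high nibble = maximum), so this controller uses 64-byte submission queue entries and 16-byte completion queue entries. decode_es below is a hypothetical helper, not part of functions.sh:

    decode_es() {
        local name=$1 raw=$2
        printf '%s=0x%x: required %d B, max %d B\n' \
            "$name" "$raw" $(( 1 << (raw & 0xf) )) $(( 1 << (raw >> 4) ))
    }
    decode_es sqes 0x66    # -> required 64 B, max 64 B
    decode_es cqes 0x44    # -> required 16 B, max 16 B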
00:10:25.343 17:44:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:25.343 17:44:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:25.343 17:44:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:25.343 17:44:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
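The four assignments at functions.sh@60-63 register the finished controller in global maps keyed by device name: ctrls (device), nvmes (name of its per-namespace array), bdfs (PCI address), and ordered_ctrls (index-ordered list). A sketch of how those registries can be read back after enumeration completes; the array names match the xtrace, the loop itself is illustrative:

    for ctrl in "${!ctrls[@]}"; do             # e.g. ctrl=nvme1
        printf '%s @ %s (namespace map: %s)\n' \
            "$ctrl" "${bdfs[$ctrl]}" "${nvmes[$ctrl]}"
    done
    # -> nvme1 @ 0000:00:10.0 (namespace map: nvme1_ns)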
00:10:25.343 17:44:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:25.343 17:44:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:10:25.343 17:44:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:10:25.343 17:44:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 (scripts/common.sh@18-27: return 0)
00:10:25.343 17:44:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:10:25.343 17:44:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:10:25.343 17:44:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:10:25.343 17:44:52 nvme_scc -- nvme/functions.sh -- # nvme2 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12342' mn='QEMU NVMe Ctrl' fr='8.0.0' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:10:25.344 17:44:52 nvme_scc -- nvme/functions.sh -- # nvme2 id-ctrl (contd.): wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 ...
00:10:25.344 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.344 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:25.344 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:25.344 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.344 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.344 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.344 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:25.344 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:25.344 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.344 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.344 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.344 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:25.344 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:25.345 17:44:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:25.345 
17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.345 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:25.346 
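The trace above is nvme/functions.sh's nvme_get helper populating the nvme2 associative array: it runs /usr/local/src/nvme-cli/nvme id-ctrl against the controller, splits each output line at the first ':' (hence the repeated IFS=: / read -r reg val pairs), and evals each pair into nvme2[reg]=val. A minimal sketch of that pattern, assuming nvme-cli's default "field : value" output format; the function name parse_id_ctrl and the final echo are illustrative, not the exact helpers from the SPDK scripts:

    #!/usr/bin/env bash
    # Parse "field : value" lines from nvme-cli into an associative array,
    # mirroring the IFS=: / read -r reg val / eval loop in the trace above.
    declare -A ctrl

    parse_id_ctrl() {
      local reg val
      while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}                     # field names are padded with blanks
        val="${val#"${val%%[![:space:]]*}"}"         # strip leading whitespace from value
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val  # skip blank/empty lines
      done < <(nvme id-ctrl "$1")
    }

    parse_id_ctrl /dev/nvme2
    echo "mdts=${ctrl[mdts]} subnqn=${ctrl[subnqn]}"

Note that only the first ':' splits, so values that themselves contain colons (e.g. subnqn=nqn.2019-08.org.qemu:12342 above) survive intact. With the controller table filled, functions.sh@54 then enumerates namespaces with the extglob pattern @("ng${ctrl##*nvme}"|"${ctrl##*/}n")*, which matches both the ng2nN character devices and the nvme2nN block devices under /sys/class/nvme/nvme2, as the entries that follow show.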
17:44:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:10:25.346 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:25.347 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:10:25.348 17:44:52 nvme_scc -- 
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 
17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:10:25.348 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0
00:10:25.348-00:10:25.349 17:44:52 nvme_scc -- nvme/functions.sh@21-23 -- # [ng2n2 id-ns parse, condensed: nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0']
00:10:25.349 17:44:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
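The run of functions.sh@21-23 lines above is one complete pass of the nvme_get helper over `nvme id-ns` output: functions.sh@16 runs the nvme-cli binary, the loop at @21 splits each "reg : val" line on the first colon, @22 keeps only lines that actually carry a value, and @23 evals the pair into a global associative array named after the device node. A minimal sketch of that loop, reconstructed from the trace; the whitespace trimming and the nvme_bin variable are assumptions, not the helper's literal text:

#!/usr/bin/env bash
# Sketch of the nvme_get parse loop seen at functions.sh@16-23; trimming details assumed.
nvme_bin=/usr/local/src/nvme-cli/nvme        # binary path as it appears in the trace
nvme_get() {
    local ref=$1 reg val
    shift                                    # functions.sh@18
    local -gA "$ref=()"                      # functions.sh@20: global assoc array, e.g. ng2n2=()
    while IFS=: read -r reg val; do          # functions.sh@21: split "reg : val" on the first ':'
        [[ -n $val ]] || continue            # functions.sh@22: skip lines with no value
        reg=${reg//[[:space:]]/}             # normalize the key, e.g. "lbaf  4 " -> lbaf4 (assumption)
        val=${val# }                         # drop the space after ':' (assumption)
        eval "${ref}[${reg}]=\"\$val\""      # functions.sh@23: e.g. ng2n2[mssrl]=128
    done < <("$nvme_bin" "$@")               # functions.sh@16: e.g. nvme id-ns /dev/ng2n2
}
# nvme_get ng2n2 id-ns /dev/ng2n2 && echo "${ng2n2[mssrl]}"   # -> 128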
00:10:25.349 17:44:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:25.349 17:44:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:10:25.349 17:44:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:10:25.349 17:44:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:10:25.349 17:44:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:10:25.349-00:10:25.351 17:44:52 nvme_scc -- nvme/functions.sh@21-23 -- # [ng2n3 id-ns parse, condensed: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0']
00:10:25.351 17:44:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
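Registering ng2n3 closes out the generic character nodes; the loop now revisits the same three namespaces through their block nodes (nvme2n1 through nvme2n3). The driver is the extglob visible at functions.sh@54: it matches both ng2n* and nvme2n* entries under the controller directory, and functions.sh@58 keys each parsed node into _ctrl_ns by namespace id. A sketch of that walk, reusing the nvme_get sketch above:

# Namespace walk per functions.sh@54-58; the extglob is copied from the trace.
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
declare -A _ctrl_ns
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # expands to @(ng2|nvme2n)*
    [[ -e $ns ]] || continue                  # functions.sh@55
    ns_dev=${ns##*/}                          # e.g. ng2n3 or nvme2n1
    nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # functions.sh@56-57
    _ctrl_ns[${ns##*n}]=$ns_dev               # functions.sh@58: slot 1/2/3 by NSID
done

Because ng2nY sorts before nvme2nY, each _ctrl_ns slot is assigned once for the generic node and again for the block node, so after the walk _ctrl_ns points at the block devices; that overwrite is exactly what the registrations in this trace show.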
00:10:25.351 17:44:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:10:25.351 17:44:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:10:25.351 17:44:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:25.351 17:44:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:10:25.351-00:10:25.352 17:44:52 nvme_scc -- nvme/functions.sh@21-23 -- # [nvme2n1 id-ns parse, condensed: field-for-field identical to the ng2n3 parse above, nsze=0x100000 through lbaf7='ms:64 lbads:12 rp:0 ']
00:10:25.352 17:44:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
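nvme2n1 is now registered, and every namespace parsed so far reports flbas=0x4 with lbaf4 flagged "(in use)": lbads:12 and ms:0, meaning 4096-byte data blocks with no interleaved metadata. That can be decoded straight from the arrays the trace just filled; a small sketch under the assumption that nvme2n1[] was populated as above (bash 4+ associative arrays):

# Decode the in-use LBA format from the parsed fields.
fmt=$(( nvme2n1[flbas] & 0xf ))              # low nibble of FLBAS selects the LBA format slot
lbaf=${nvme2n1[lbaf$fmt]}                    # -> 'ms:0 lbads:12 rp:0 (in use)'
lbads=${lbaf#*lbads:}                        # strip through 'lbads:'
lbads=${lbads%% *}                           # keep the exponent, here 12
echo "nvme2n1 block size: $(( 1 << lbads )) bytes"   # -> 4096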
00:10:25.352 17:44:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:10:25.352 17:44:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:10:25.352 17:44:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:10:25.352 17:44:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:10:25.352-00:10:25.354 17:44:52 nvme_scc -- nvme/functions.sh@21-23 -- # [nvme2n2 id-ns parse, condensed: field-for-field identical to the ng2n3 parse above, nsze=0x100000 through lbaf7='ms:64 lbads:12 rp:0 ']
00:10:25.354 17:44:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
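With nvme2n2 registered, only nvme2n3 remains. Since each namespace is parsed twice, once per node flavor, the generic and block arrays for the same NSID should agree field for field, which is what the condensed parses above show. A quick probe under that assumption (arrays populated as in the trace):

# Spot-check that the ng and nvme arrays for NSID 2 agree.
for f in nsze flbas mssrl msrc; do
    [[ ${ng2n2[$f]} == "${nvme2n2[$f]}" ]] || echo "ng2n2/nvme2n2 disagree on $f"
done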
00:10:25.354 17:44:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:10:25.354 17:44:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:10:25.354 17:44:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:10:25.354 17:44:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:10:25.354-00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21-23 -- # [nvme2n3 id-ns parse, condensed: field-for-field identical to the parses above, recorded here from nsze=0x100000 through mssrl=128]
00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- #
nvme2n3[mcl]=128 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:25.355 17:44:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:25.355 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:25.356 17:44:52 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:25.356 17:44:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:25.356 17:44:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:25.356 17:44:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:25.356 17:44:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:25.356 17:44:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:25.356 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:25.357 17:44:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 
17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:25.357 17:44:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.357 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.617 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 
17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:25.618 
17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:25.618 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.619 17:44:52 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:25.619 17:44:52 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:25.619 17:44:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 
00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:10:25.619 17:44:52 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:10:25.619 17:44:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:10:25.619 17:44:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:10:25.619 17:44:52 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:26.188 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:26.755 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:26.755 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:27.013 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:27.013 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:27.013 17:44:54 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:27.013 17:44:54 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:27.013 17:44:54 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.013 17:44:54 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:27.013 ************************************ 00:10:27.013 START TEST nvme_simple_copy 00:10:27.013 ************************************ 00:10:27.013 17:44:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:10:27.580 Initializing NVMe Controllers 00:10:27.580 Attaching to 0000:00:10.0 00:10:27.580 Controller supports SCC. Attached to 0000:00:10.0 00:10:27.580 Namespace ID: 1 size: 6GB 00:10:27.580 Initialization complete. 
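The long xtrace above is `nvme/functions.sh`'s `nvme_get` walking `nvme id-ctrl`/`id-ns` output line by line (`IFS=: read -r reg val`) and storing each `reg : val` pair in a per-device bash associative array. A minimal sketch of that pattern, assuming nvme-cli's plain-text `field : value` report format — the array and device names here are illustrative, not the script's own:

```bash
#!/usr/bin/env bash
# Condensed sketch of the nvme_get pattern traced above: read nvme-cli's
# "reg : val" report and keep it in an associative array keyed by field.
declare -A ctrl_regs
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                 # e.g. "oncs", "mdts", "subnqn"
    [[ -n $reg && -n $val ]] && ctrl_regs[$reg]=${val# }
done < <(nvme id-ctrl /dev/nvme3)
echo "oncs=${ctrl_regs[oncs]} mdts=${ctrl_regs[mdts]}"
```

The real helper routes each assignment through `eval` so it can target a caller-named array (`nvme3`, `nvme2n3`, ...), which is why every field in the trace appears twice: once as the quoted `eval` string and once as the expanded assignment.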
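The controller being exercised here was chosen a few entries earlier by `get_ctrls_with_feature scc`, which tests bit 8 (Copy command support) of each controller's ONCS word and hands `nvme_scc.sh` the first match (`nvme1` at `0000:00:10.0`). A standalone sketch of that check, assuming nvme-cli is installed; the helper name is illustrative:

```bash
#!/usr/bin/env bash
# Sketch of the ctrl_has_scc test from the trace: ONCS bit 8 of id-ctrl
# advertises Simple Copy (0x15d & 0x100 != 0 for these QEMU controllers).
has_simple_copy() {
    local dev=$1 oncs
    oncs=$(nvme id-ctrl "$dev" | awk -F: '/^oncs/ {gsub(/[[:space:]]/, "", $2); print $2}')
    (( oncs & 1 << 8 ))
}
has_simple_copy /dev/nvme1 && echo "nvme1 supports Simple Copy"
```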
00:10:27.580 00:10:27.580 Controller QEMU NVMe Ctrl (12340 ) 00:10:27.580 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:10:27.580 Namespace Block Size:4096 00:10:27.580 Writing LBAs 0 to 63 with Random Data 00:10:27.580 Copied LBAs from 0 - 63 to the Destination LBA 256 00:10:27.580 LBAs matching Written Data: 64 00:10:27.580 00:10:27.580 real 0m0.337s 00:10:27.580 user 0m0.125s 00:10:27.580 sys 0m0.111s 00:10:27.580 17:44:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.580 17:44:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:10:27.580 ************************************ 00:10:27.580 END TEST nvme_simple_copy 00:10:27.580 ************************************ 00:10:27.580 00:10:27.580 real 0m9.047s 00:10:27.580 user 0m1.587s 00:10:27.580 sys 0m2.516s 00:10:27.580 17:44:54 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.580 17:44:54 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:10:27.580 ************************************ 00:10:27.580 END TEST nvme_scc 00:10:27.580 ************************************ 00:10:27.580 17:44:54 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:10:27.580 17:44:54 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:10:27.580 17:44:54 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:10:27.580 17:44:54 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:10:27.580 17:44:54 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:10:27.580 17:44:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:27.580 17:44:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.580 17:44:54 -- common/autotest_common.sh@10 -- # set +x 00:10:27.580 ************************************ 00:10:27.580 START TEST nvme_fdp 00:10:27.580 ************************************ 00:10:27.581 17:44:54 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh 00:10:27.581 * Looking for test storage... 00:10:27.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:27.581 17:44:54 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:27.581 17:44:54 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:27.581 17:44:54 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:27.840 17:44:54 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:27.840 17:44:54 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:10:27.841 17:44:54 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:27.841 17:44:54 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:10:27.841 17:44:54 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:10:27.841 17:44:54 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:27.841 17:44:54 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:10:27.841 17:44:54 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:27.841 17:44:54 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:27.841 17:44:54 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:27.841 17:44:54 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:10:27.841 17:44:54 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:27.841 17:44:54 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:27.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.841 --rc genhtml_branch_coverage=1 00:10:27.841 --rc genhtml_function_coverage=1 00:10:27.841 --rc genhtml_legend=1 00:10:27.841 --rc geninfo_all_blocks=1 00:10:27.841 --rc geninfo_unexecuted_blocks=1 00:10:27.841 00:10:27.841 ' 00:10:27.841 17:44:54 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:27.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.841 --rc genhtml_branch_coverage=1 00:10:27.841 --rc genhtml_function_coverage=1 00:10:27.841 --rc genhtml_legend=1 00:10:27.841 --rc geninfo_all_blocks=1 00:10:27.841 --rc geninfo_unexecuted_blocks=1 00:10:27.841 00:10:27.841 ' 00:10:27.841 17:44:54 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:27.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.841 --rc genhtml_branch_coverage=1 00:10:27.841 --rc genhtml_function_coverage=1 00:10:27.841 --rc genhtml_legend=1 00:10:27.841 --rc geninfo_all_blocks=1 00:10:27.841 --rc geninfo_unexecuted_blocks=1 00:10:27.841 00:10:27.841 ' 00:10:27.841 17:44:54 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:27.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:27.841 --rc genhtml_branch_coverage=1 00:10:27.841 --rc genhtml_function_coverage=1 00:10:27.841 --rc genhtml_legend=1 00:10:27.841 --rc geninfo_all_blocks=1 00:10:27.841 --rc geninfo_unexecuted_blocks=1 00:10:27.841 00:10:27.841 ' 00:10:27.841 17:44:54 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:27.841 17:44:54 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:10:27.841 17:44:54 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:10:27.841 17:44:54 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:10:27.841 17:44:54 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:27.841 17:44:54 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:10:27.841 17:44:54 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:27.841 17:44:54 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
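[annotation] The cmp_versions trace above decides whether the installed lcov predates version 2 so the matching coverage flags get exported. A condensed sketch of the same left-to-right numeric comparison, assuming plain dot-separated numeric components:

    # Succeed iff dotted version $1 is strictly older than $2
    # (condensed from the cmp_versions walk traced above).
    version_lt() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"   # true, matching the trace

The harness splits on IFS=.-: so it also tolerates dash/colon suffixes; the sketch keeps only the dot case.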
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:27.841 17:44:54 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:27.841 17:44:54 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.841 17:44:54 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.841 17:44:54 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.841 17:44:54 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:10:27.841 17:44:54 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:27.841 17:44:54 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:10:27.841 17:44:54 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:10:27.841 17:44:54 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:10:27.841 17:44:54 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:10:27.841 17:44:54 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:10:27.841 17:44:54 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:10:27.841 17:44:54 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:10:27.841 17:44:54 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:10:27.841 17:44:54 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:10:27.841 17:44:54 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:27.841 17:44:54 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:28.409 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:28.668 Waiting for block devices as requested 00:10:28.668 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.927 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.927 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:28.927 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:34.204 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:34.204 17:45:01 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:34.204 17:45:01 nvme_fdp 
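[annotation] Before scanning, the setup.sh reset above moves the four QEMU NVMe functions (1b36 0010) from uio_pci_generic back to the kernel nvme driver, then waits for their block devices. A sketch for spot-checking such bindings by PCI class, assuming the standard sysfs layout (not part of the harness):

    # List each NVMe-class PCI function (class code 0x010802) and its bound
    # driver, e.g. to confirm a setup.sh reset took effect.
    for dev in /sys/bus/pci/devices/*; do
        [[ $(< "$dev/class") == 0x010802 ]] || continue
        drv=none
        [[ -e $dev/driver ]] && drv=$(basename "$(readlink "$dev/driver")")
        echo "${dev##*/} -> $drv"
    done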
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:34.204 17:45:01 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:34.204 17:45:01 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:34.204 17:45:01 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:34.204 17:45:01 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.204 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- 
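[annotation] nvme_get, traced above, turns nvme-cli's "field : value" listing into a bash associative array named after the controller, eval-ing each assignment so the array name can be dynamic (local -gA nvme0=()). A simplified fixed-name sketch of the same parse that avoids eval:

    # Parse `nvme id-ctrl` output into an associative array,
    # a simplified take on the nvme_get read loop above.
    declare -A idctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}   # field names are padded with spaces
        [[ -n $reg ]] || continue
        idctrl[$reg]=${val# }      # keep the value, minus one leading space
    done < <(nvme id-ctrl /dev/nvme0)
    echo "vid=${idctrl[vid]} oncs=${idctrl[oncs]}"

The eval in the harness is what lets one helper fill nvme0, nvme1, ... without indirection; the cost is that every parsed value passes through the shell parser, hence the careful quoting visible in the trace.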
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:34.205 17:45:01 nvme_fdp -- 
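[annotation] Among the fields above, mdts=7 bounds transfer size: MDTS is a power-of-two multiple of the controller's minimum memory page size (CAP.MPSMIN), so with the usual 4 KiB minimum page this controller caps each I/O at 512 KiB. A small sketch, with the 4 KiB MPSMIN as a stated assumption rather than a value read from CAP:

    # MDTS -> bytes, assuming CAP.MPSMIN is 4 KiB (mdts=0 would mean no limit).
    mdts=7 mpsmin_bytes=4096
    if (( mdts == 0 )); then
        echo "no reported transfer limit"
    else
        echo "max transfer: $(( mpsmin_bytes << mdts )) bytes"   # 524288 = 512 KiB
    fi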
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:34.205 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:34.206 17:45:01 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # 
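[annotation] wctemp=343 and cctemp=373 above are kelvins, per the NVMe convention for temperature thresholds; subtracting the 273 offset gives a 70 C warning and 100 C critical threshold for this emulated controller:

    # NVMe temperature thresholds are reported in kelvins.
    for t in 343 373; do
        echo "$t K ~ $(( t - 273 )) C"   # 70 C warning, 100 C critical
    done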
IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:34.206 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:34.207 17:45:01 nvme_fdp -- 
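[annotation] sqes=0x66 and cqes=0x44 above pack two log2 sizes per byte: the low nibble is the required (minimum) queue entry size and the high nibble the maximum, so this controller uses fixed 64-byte submission and 16-byte completion entries. A small decoder sketch:

    # Decode SQES/CQES: low nibble = required entry size, high nibble = max,
    # both log2(bytes). 0x66 -> 64 B SQEs, 0x44 -> 16 B CQEs.
    decode_es() {
        local name=$1 val=$2
        printf '%s: min %d B, max %d B\n' "$name" \
            "$(( 1 << (val & 0xf) ))" "$(( 1 << (val >> 4) ))"
    }
    decode_es sqes 0x66
    decode_es cqes 0x44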
nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 
17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:34.207 17:45:01 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:10:34.207 17:45:01 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:34.207 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:10:34.208 17:45:01 nvme_fdp -- 
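[annotation] nsze, ncap and nuse above are all 0x140000 logical blocks, i.e. this namespace is fully allocated and fully utilized; with the 4096-byte format selected by flbas=0x4 (decoded below), that is 5 GiB. A back-of-envelope check:

    # Namespace size in bytes: nsze (blocks) x block size.
    nsze=0x140000 block=4096   # block size taken from the lbaf entry in use
    echo "$(( nsze * block )) bytes = $(( nsze * block >> 30 )) GiB"   # 5 GiB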
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.208 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:10:34.209 17:45:01 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
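[annotation] The lbaf0..lbaf7 table just parsed pairs with flbas=0x4: the low nibble of flbas indexes the format in use, here lbaf4 (ms:0 lbads:12), i.e. 4096-byte data blocks with no metadata. A sketch of the lookup, with the lbads column copied from the trace:

    # Resolve the in-use block size from flbas plus the lbaf table above.
    flbas=0x4
    lbads=(9 9 9 9 12 12 12 12)   # log2 block size per lbaf entry, from the trace
    fmt=$(( flbas & 0xf ))
    echo "lbaf$fmt in use: $(( 1 << lbads[fmt] ))-byte blocks"   # -> 4096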
00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.209 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:34.210 17:45:01 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.210 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:34.211 17:45:01 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:34.211 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:34.488 17:45:01 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:34.488 17:45:01 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:34.488 17:45:01 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:34.488 17:45:01 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:34.488 17:45:01 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.488 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
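[editor's note] Between the two identify dumps, the trace also shows the outer discovery loop (functions.sh@47-@63, just before this nvme1 id-ctrl parse): each /sys/class/nvme/nvmeN controller is checked against the allowed PCI list via pci_can_use, identified with id-ctrl, its ngNnM/nvmeNnM namespace nodes are parsed the same way, and the results are recorded in the ctrls/nvmes/bdfs/ordered_ctrls maps. A rough sketch assembled from the traced commands; the wrapper name, the pci lookup, and the map declarations are assumptions, the loop body follows the trace:

shopt -s extglob                                      # the @(...) glob at @54 needs extglob
declare -A ctrls nvmes bdfs; declare -a ordered_ctrls # assumed globals filled at @60-@63

scan_nvmes() {   # hypothetical wrapper; the trace shows only the body
  local ctrl pci ctrl_dev ns ns_dev
  for ctrl in /sys/class/nvme/nvme*; do                          # @47
    [[ -e $ctrl ]] || continue                                   # @48
    pci=$(< "$ctrl/address")                                     # assumed source of @49's pci=0000:00:10.0
    pci_can_use "$pci" || continue                               # @50, scripts/common.sh filter
    ctrl_dev=${ctrl##*/}                                         # @51, e.g. nvme1
    nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"                # @52
    local -gA "${ctrl_dev}_ns=()"                                # assumed; target of the nameref
    local -n _ctrl_ns=${ctrl_dev}_ns                             # @53
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # @54: ngNnM and nvmeNnM nodes
      [[ -e $ns ]] || continue                                   # @55
      ns_dev=${ns##*/}                                           # @56, e.g. ng1n1
      nvme_get "$ns_dev" id-ns "/dev/$ns_dev"                    # @57
      _ctrl_ns[${ns##*n}]=$ns_dev                                # @58, keyed by namespace index
    done
    ctrls["$ctrl_dev"]=$ctrl_dev                                 # @60
    nvmes["$ctrl_dev"]=${ctrl_dev}_ns                            # @61
    bdfs["$ctrl_dev"]=$pci                                       # @62
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev                   # @63
  done
}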
00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.489 17:45:01 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.489 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:10:34.490 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:10:34.490 17:45:01 
    ng1n1 (id-ns): nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7
        flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1
        nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
        npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
        nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
        nguid=00000000000000000000000000000000 eui64=0000000000000000
        lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0'
        lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0'
        lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:10:34.492 17:45:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
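Worth decoding from the ng1n1 dump: flbas=0x7 selects entry 7 of the LBA format table above, the one nvme-cli flags "(in use)". The low nibble of FLBAS is the format index; hand-decoding it with values taken straight from the trace:

    # FLBAS bits 3:0 index the active LBA format; lbads is a
    # power-of-two data block size, ms is metadata bytes per block.
    flbas=0x7
    lbaf_idx=$((flbas & 0xf))   # -> 7, i.e. lbaf7 'ms:64 lbads:12 rp:0 (in use)'
    lbads=12                    # from the lbaf7 descriptor above
    ms=64                       # same descriptor
    echo "LBA format $lbaf_idx: $((1 << lbads))-byte blocks + $ms B metadata"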
00:10:34.492 17:45:01 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:10:34.492 17:45:01 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:10:34.492 17:45:01 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:10:34.492 17:45:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
    nvme1n1 (id-ns): identical to ng1n1 above -- nsze=0x17a17a ncap=0x17a17a
        nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0
        rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
        noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128
        msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
        nguid=00000000000000000000000000000000 eui64=0000000000000000
        lbaf0-lbaf7 as for ng1n1, lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:10:34.493 17:45:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:10:34.493 17:45:01 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:34.493 17:45:01 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:34.493 17:45:01 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:34.493 17:45:01 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:10:34.493 17:45:01 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:34.493 17:45:01 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:10:34.493 17:45:01 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:10:34.493 17:45:01 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:10:34.493 17:45:01 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:10:34.493 17:45:01 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:10:34.493 17:45:01 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:10:34.493 17:45:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
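The for-ns loop that just walked ng1n1 and nvme1n1 relies on bash extended globbing: one pattern matches both the character-device node (ngXnY) and the block node (nvmeXnY) under the controller's sysfs directory. A standalone sketch of the same idiom, with extglob/nullglob shown explicitly (functions.sh enables them elsewhere):

    # match both namespace node flavours for one controller, as the
    # @( ... | ... ) alternation at functions.sh@54 does; needs extglob.
    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme1
    # "${ctrl##*nvme}" -> "1" and "${ctrl##*/}" -> "nvme1", so the
    # pattern expands to: /sys/class/nvme/nvme1/@(ng1|nvme1n)*
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "namespace node: ${ns##*/}"    # ng1n1, nvme1n1
    done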
    nvme2 (id-ctrl): vid=0x1b36 ssvid=0x1af4 sn='12342' mn='QEMU NVMe Ctrl'
        fr='8.0.0' rab=6 ieee=525400 cmic=0 mdts=7
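mdts=7 above bounds how much data a single command may move. MDTS is a power-of-two multiple of the controller's minimum memory page size (CAP.MPSMIN, which this trace does not print); 4 KiB is assumed below as the usual QEMU value:

    # max data transfer per command = 2^MDTS * minimum page size;
    # mpsmin_bytes=4096 is an assumption, CAP.MPSMIN is not in this log.
    mdts=7
    mpsmin_bytes=4096
    echo "MDTS cap: $(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"   # -> 512 KiB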
    nvme2 (id-ctrl, continued): cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100
        ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000
        crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3
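oacs=0x12a is a bit mask of optional admin commands. Read against the base spec's OACS layout it decodes to Format NVM (bit 1), Namespace Management (bit 3), Directives (bit 5, the mechanism FDP placement handles ride on) and Doorbell Buffer Config (bit 8); a quick check:

    # test the individual OACS bits parsed above; bit names follow the
    # NVMe base spec's Identify Controller OACS field.
    oacs=0x12a
    declare -A oacs_bits=([1]="Format NVM" [3]="Namespace Management"
                          [5]="Directives" [8]="Doorbell Buffer Config")
    for bit in "${!oacs_bits[@]}"; do
        (( oacs >> bit & 1 )) && echo "supported: ${oacs_bits[$bit]}"
    done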
    nvme2 (id-ctrl, continued): lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
        wctemp=343 cctemp=373 mtfa=0 hmpre=0
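wctemp=343 and cctemp=373 look odd until you recall that the thresholds are reported in kelvin:

    # convert the traced kelvin thresholds to Celsius
    wctemp=343; cctemp=373
    echo "warning:  $((wctemp - 273)) C"    # -> 70 C
    echo "critical: $((cctemp - 273)) C"    # -> 100 C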
    nvme2 (id-ctrl, continued): hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0
        dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0
        nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0
    nvme2 (id-ctrl, continued): nanagrpid=0 pels=0 domainid=0 megcap=0
        sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d
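Three of the fields just parsed pack more than one value. SQES/CQES carry minimum (low nibble) and maximum (high nibble) entry-size exponents, and oncs=0x15d is the optional-I/O-command mask (bits 0, 2, 3, 4, 6, 8: Compare, Dataset Management, Write Zeroes, the Save/Select features bit, Timestamp and Copy, per the base spec's ONCS layout):

    # unpack the queue entry sizes recorded above
    sqes=0x66; cqes=0x44
    echo "SQE: $((1 << (sqes & 0xf)))-$((1 << (sqes >> 4))) bytes"   # -> 64-64
    echo "CQE: $((1 << (cqes & 0xf)))-$((1 << (cqes >> 4))) bytes"   # -> 16-16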
    nvme2 (id-ctrl, continued): fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0
        nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
        subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0
00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # 
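The pattern filling this trace is the script's nvme_get helper (functions.sh@16-23): it runs nvme-cli against a device, splits each "field : value" line of the identify output on the colon, and evals the pair into a global associative array named after the device node (nvme2, ng2n1, ...). A minimal sketch of that loop, reconstructed from the trace lines above and simplified (the whitespace trimming here is illustrative, not the verbatim SPDK helper):

    nvme_get() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                  # e.g. declare -gA nvme2=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}         # "sqes      " -> "sqes"
            val=${val# }                     # drop the space after the colon
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg}]=\"\$val\""  # nvme2[sqes]=0x66, ...
        done < <(/usr/local/src/nvme-cli/nvme "$cmd" "$dev")
    }
    # Usage mirroring the trace:
    #   nvme_get nvme2 id-ctrl /dev/nvme2
    #   echo "${nvme2[subnqn]}"   -> nqn.2019-08.org.qemu:12342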
00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@53 -- local -n _ctrl_ns=nvme2_ns
00:10:34.495 17:45:01 nvme_fdp -- nvme/functions.sh@54-57 -- for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* -- found /sys/class/nvme/nvme2/ng2n1 -- ns_dev=ng2n1 -- nvme_get ng2n1 id-ns /dev/ng2n1
00:10:34.495 17:45:01 nvme_fdp -- /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 (eval trace condensed): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:10:34.501 17:45:01 nvme_fdp -- ng2n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:10:34.501 17:45:01 nvme_fdp -- nvme/functions.sh@58 -- _ctrl_ns[1]=ng2n1
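Around each namespace, the @53-@58 lines are the registration pass: an extglob pattern matches both the character-device nodes ("ng2nX") and the block-device node ("nvme2nX") under the controller's sysfs entry, each match is parsed with nvme_get, and the resulting array name is recorded in the controller's namespace map through a bash nameref. Roughly, under the same simplifications as the sketch above:

    shopt -s extglob                      # the @54 glob needs extglob
    declare -A nvme2_ns=()
    declare -n _ctrl_ns=nvme2_ns          # nameref, as at functions.sh@53
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do  # ng2*|nvme2n*
        [[ -e $ns ]] || continue          # the @55 existence check
        ns_dev=${ns##*/}                  # ng2n1, ng2n2, ng2n3, nvme2n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev       # @58: keyed by namespace number
    done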
00:10:34.501 17:45:01 nvme_fdp -- nvme/functions.sh@54-57 -- found /sys/class/nvme/nvme2/ng2n2 -- ns_dev=ng2n2 -- nvme_get ng2n2 id-ns /dev/ng2n2
00:10:34.502 17:45:01 nvme_fdp -- /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 (eval trace condensed): every field identical to ng2n1 above, including nsze=ncap=nuse=0x100000, flbas=0x4, mssrl=128 mcl=128 msrc=127, and lbaf0-lbaf7 (lbaf4 in use)
00:10:34.502 17:45:01 nvme_fdp -- nvme/functions.sh@58 -- _ctrl_ns[2]=ng2n2
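A worked reading of the geometry these dumps repeat for every namespace: flbas=0x4 selects LBA format 4 via its low nibble, and lbaf4='ms:0 lbads:12 rp:0 (in use)' gives lbads=12, i.e. 2^12 = 4096-byte logical blocks with no metadata; nsze=ncap=nuse=0x100000 such blocks makes each namespace a fully allocated 4 GiB. The lookup is mechanical once nvme_get has built the arrays (this helper is illustrative, not part of functions.sh):

    lba_data_size() {
        local -n _ns=$1                       # e.g. the ng2n1 array above
        local fmt=$(( ${_ns[flbas]} & 0xf ))  # low nibble -> format index 4
        local lbaf=${_ns[lbaf$fmt]}           # 'ms:0 lbads:12 rp:0 (in use)'
        local lbads=${lbaf#*lbads:}
        echo $(( 1 << ${lbads%% *} ))         # 1 << 12 = 4096 bytes
    }
    # lba_data_size ng2n1 -> 4096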
00:10:34.502 17:45:01 nvme_fdp -- nvme/functions.sh@54-57 -- found /sys/class/nvme/nvme2/ng2n3 -- ns_dev=ng2n3 -- nvme_get ng2n3 id-ns /dev/ng2n3
00:10:34.503 17:45:01 nvme_fdp -- /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 (eval trace condensed): every field identical to ng2n1 above
00:10:34.765 17:45:01 nvme_fdp -- nvme/functions.sh@58 -- _ctrl_ns[3]=ng2n3
00:10:34.765 17:45:01 nvme_fdp -- nvme/functions.sh@54-56 -- found /sys/class/nvme/nvme2/nvme2n1 -- ns_dev=nvme2n1
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:34.765 17:45:01 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:10:34.765 17:45:01 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:34.765 17:45:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:10:34.765 17:45:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:10:34.766 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1 id-ns fields (condensed): nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0-lbaf7 identical to ng2n3 above, lbaf4 in use
00:10:34.767 17:45:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:10:34.767 17:45:01 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:34.767 17:45:01 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:10:34.767 17:45:01 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
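The nvme_get helper whose per-field xtrace dominates this log splits each "reg : val" line printed by nvme-cli on the first ':' and evals the pair into a global associative array named after the device. Because read -r reg val leaves the remainder of the line, embedded colons included, in val, values such as 'ms:0 lbads:9 rp:0' survive intact. A standalone sketch of the pattern visible at functions.sh@16-23; the whitespace trimming is a guess at details the trace does not show:

    # Parse "reg : val" output of nvme-cli into a global associative array,
    # e.g. nvme_get nvme2n1 id-ns /dev/nvme2n1 yields ${nvme2n1[flbas]} == 0x4.
    nvme_get() {
        local ref=$1 subcmd=$2 dev=$3 reg val
        local -gA "$ref=()"                  # declares e.g. nvme2n1=() globally
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # skip lines with nothing after ':'
            reg=${reg// /}                   # drop column padding in the key
            val=${val# }                     # drop the single space after ':'
            eval "$ref[\$reg]=\$val"         # nvme2n1[nsze]=0x100000, ...
        done < <(/usr/local/src/nvme-cli/nvme "$subcmd" "$dev")
    }

The same call repeats below for nvme2n2 and nvme2n3, whose identify data on this QEMU setup is byte-for-byte the same as nvme2n1's.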
00:10:34.767 17:45:01 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:10:34.767 17:45:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:10:34.768 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2 id-ns fields (condensed): identical to nvme2n1 above (nsze=ncap=nuse=0x100000, nsfeat=0x14, nlbaf=7, flbas=0x4, mc=0x3, dpc=0x1f, dlfeat=1, mssrl=128, mcl=128, msrc=127, remaining fields 0, lbaf4 in use)
00:10:34.768 17:45:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:10:34.768 17:45:01 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:10:34.768 17:45:01 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:10:34.768 17:45:01 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:10:34.768 17:45:01 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:10:34.768 17:45:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:10:34.769 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3 id-ns fields (condensed): identical to nvme2n1/nvme2n2 above
00:10:34.770 17:45:01 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:10:34.770 17:45:01 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:10:34.770 17:45:01 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:10:34.770 17:45:01 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:10:34.770 17:45:01 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:10:34.770 17:45:01 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:34.770 17:45:01 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:10:34.770 17:45:01 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:10:34.770 17:45:01 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:10:34.770 17:45:01 nvme_fdp -- scripts/common.sh@18 -- # local i
00:10:34.770 17:45:01 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:10:34.770 17:45:01 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:34.770 17:45:01 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:10:34.770 17:45:01 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
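The scripts/common.sh trace above, [[ =~ 0000:00:13.0 ]] with an empty left operand followed by [[ -z '' ]] and return 0, is what an allow/block-list check looks like when both lists expand to nothing: the address is not blocked, no allow list is set, so the device is usable. A hedged reconstruction of that check; the PCI_BLOCKED and PCI_ALLOWED names are an assumption about SPDK's scripts/common.sh, not something this log shows:

    # Return 0 if the test run may claim the PCI function in $1.
    # PCI_BLOCKED / PCI_ALLOWED are assumed names for the traced lists;
    # both are empty here, hence "[[ =~ 0000:00:13.0 ]]" in the xtrace.
    pci_can_use() {
        local i
        [[ ${PCI_BLOCKED:-} =~ $1 ]] && return 1   # explicitly blocked
        [[ -z ${PCI_ALLOWED:-} ]] && return 0      # no allow list: all usable
        for i in $PCI_ALLOWED; do
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }
    pci_can_use 0000:00:13.0 && echo "0000:00:13.0 is free to use"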
00:10:34.770 17:45:01 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:10:34.770 17:45:01 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:10:34.770 17:45:01 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:10:34.770 17:45:01 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3 id-ctrl fields (condensed): vid=0x1b36 ssvid=0x1af4 sn='12343 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0x2 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x88010 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3
00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.771
17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:34.771 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.772 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
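The register-by-register trace above is the unrolled form of a short parsing loop in nvme/functions.sh: each identify-controller line is split on the colon, empty values are skipped, and populated values are stored into a per-controller associative array via eval. A minimal standalone sketch of that shape, assuming a plain "reg : val" identify dump in a hypothetical file id_ctrl.txt (the real script feeds the loop from cached nvme-cli output, not this file):

    # Sketch only: mirrors the IFS=: / read / eval pattern visible in the
    # trace; id_ctrl.txt is a hypothetical "reg : val" dump.
    declare -A nvme3
    while IFS=: read -r reg val; do
      [[ -n $val ]] || continue        # keep only populated registers
      reg=${reg//[[:space:]]/}         # "vid " -> "vid"
      val=${val# }                     # drop the separator's leading space
      eval "nvme3[$reg]=\"$val\""      # e.g. nvme3[vid]="0x1b36"
    done < id_ctrl.txt
    echo "${nvme3[subnqn]}"            # -> nqn.2019-08.org.qemu:fdp-subsys3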
00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:34.773 17:45:01 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:34.773 17:45:01 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:34.774 17:45:01 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:34.774 17:45:01 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:34.774 17:45:01 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:34.774 17:45:01 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:35.709 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:36.277 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:36.277 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:36.277 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:36.277 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:36.535 17:45:03 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:36.535 17:45:03 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:36.535 17:45:03 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.535 17:45:03 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:36.535 ************************************ 00:10:36.535 START TEST nvme_flexible_data_placement 00:10:36.535 ************************************ 00:10:36.535 17:45:03 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:36.794 Initializing NVMe Controllers 00:10:36.794 Attaching to 0000:00:13.0 00:10:36.794 Controller supports FDP Attached to 0000:00:13.0 00:10:36.794 Namespace ID: 1 Endurance Group ID: 1 00:10:36.794 Initialization complete. 
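The controller-selection trace above reduces to one arithmetic test, taken verbatim from functions.sh: CTRATT bit 19 advertises Flexible Data Placement. Only nvme3's ctratt (0x88010) has that bit set; nvme0, nvme1, and nvme2 all report 0x8000 and are passed over. Isolated for clarity, with the literal values captured above:

    # CTRATT bit 19 = Flexible Data Placement support (1 << 19 == 0x80000)
    ctrl_has_fdp() {
      local ctratt=$1
      (( ctratt & 1 << 19 ))
    }
    ctrl_has_fdp 0x8000  && echo nvme2   # bit clear: prints nothing
    ctrl_has_fdp 0x88010 && echo nvme3   # bit set: prints "nvme3"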
00:10:36.794 00:10:36.794 ================================== 00:10:36.794 == FDP tests for Namespace: #01 == 00:10:36.794 ================================== 00:10:36.794 00:10:36.794 Get Feature: FDP: 00:10:36.794 ================= 00:10:36.794 Enabled: Yes 00:10:36.794 FDP configuration Index: 0 00:10:36.794 00:10:36.794 FDP configurations log page 00:10:36.794 =========================== 00:10:36.794 Number of FDP configurations: 1 00:10:36.794 Version: 0 00:10:36.794 Size: 112 00:10:36.794 FDP Configuration Descriptor: 0 00:10:36.794 Descriptor Size: 96 00:10:36.794 Reclaim Group Identifier format: 2 00:10:36.794 FDP Volatile Write Cache: Not Present 00:10:36.794 FDP Configuration: Valid 00:10:36.794 Vendor Specific Size: 0 00:10:36.794 Number of Reclaim Groups: 2 00:10:36.794 Number of Reclaim Unit Handles: 8 00:10:36.794 Max Placement Identifiers: 128 00:10:36.794 Number of Namespaces Supported: 256 00:10:36.794 Reclaim Unit Nominal Size: 6000000 bytes 00:10:36.794 Estimated Reclaim Unit Time Limit: Not Reported 00:10:36.794 RUH Desc #000: RUH Type: Initially Isolated 00:10:36.794 RUH Desc #001: RUH Type: Initially Isolated 00:10:36.794 RUH Desc #002: RUH Type: Initially Isolated 00:10:36.794 RUH Desc #003: RUH Type: Initially Isolated 00:10:36.794 RUH Desc #004: RUH Type: Initially Isolated 00:10:36.794 RUH Desc #005: RUH Type: Initially Isolated 00:10:36.794 RUH Desc #006: RUH Type: Initially Isolated 00:10:36.794 RUH Desc #007: RUH Type: Initially Isolated 00:10:36.794 00:10:36.794 FDP reclaim unit handle usage log page 00:10:36.794 ====================================== 00:10:36.794 Number of Reclaim Unit Handles: 8 00:10:36.794 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:36.794 RUH Usage Desc #001: RUH Attributes: Unused 00:10:36.794 RUH Usage Desc #002: RUH Attributes: Unused 00:10:36.794 RUH Usage Desc #003: RUH Attributes: Unused 00:10:36.794 RUH Usage Desc #004: RUH Attributes: Unused 00:10:36.794 RUH Usage Desc #005: RUH Attributes: Unused 00:10:36.794 RUH Usage Desc #006: RUH Attributes: Unused 00:10:36.794 RUH Usage Desc #007: RUH Attributes: Unused 00:10:36.794 00:10:36.794 FDP statistics log page 00:10:36.794 ======================= 00:10:36.794 Host bytes with metadata written: 942600192 00:10:36.794 Media bytes with metadata written: 942788608 00:10:36.794 Media bytes erased: 0 00:10:36.794 00:10:36.794 FDP Reclaim unit handle status 00:10:36.794 ============================== 00:10:36.794 Number of RUHS descriptors: 2 00:10:36.794 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003d11 00:10:36.794 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:36.794 00:10:36.795 FDP write on placement id: 0 success 00:10:36.795 00:10:36.795 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:10:36.795 00:10:36.795 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:36.795 00:10:36.795 Get Feature: FDP Events for Placement handle: #0 00:10:36.795 ======================== 00:10:36.795 Number of FDP Events: 6 00:10:36.795 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:36.795 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:36.795 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:36.795 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:36.795 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:36.795 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:36.795 00:10:36.795 FDP events log page
00:10:36.795 =================== 00:10:36.795 Number of FDP events: 1 00:10:36.795 FDP Event #0: 00:10:36.795 Event Type: RU Not Written to Capacity 00:10:36.795 Placement Identifier: Valid 00:10:36.795 NSID: Valid 00:10:36.795 Location: Valid 00:10:36.795 Placement Identifier: 0 00:10:36.795 Event Timestamp: 8 00:10:36.795 Namespace Identifier: 1 00:10:36.795 Reclaim Group Identifier: 0 00:10:36.795 Reclaim Unit Handle Identifier: 0 00:10:36.795 00:10:36.795 FDP test passed 00:10:36.795 00:10:36.795 real 0m0.304s 00:10:36.795 user 0m0.088s 00:10:36.795 sys 0m0.114s 00:10:36.795 17:45:03 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.795 17:45:03 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:36.795 ************************************ 00:10:36.795 END TEST nvme_flexible_data_placement 00:10:36.795 ************************************ 00:10:36.795 00:10:36.795 real 0m9.251s 00:10:36.795 user 0m1.716s 00:10:36.795 sys 0m2.557s 00:10:36.795 17:45:03 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:36.795 17:45:03 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:36.795 ************************************ 00:10:36.795 END TEST nvme_fdp 00:10:36.795 ************************************ 00:10:36.795 17:45:03 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:36.795 17:45:03 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:36.795 17:45:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:36.795 17:45:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:36.795 17:45:03 -- common/autotest_common.sh@10 -- # set +x 00:10:36.795 ************************************ 00:10:36.795 START TEST nvme_rpc 00:10:36.795 ************************************ 00:10:36.795 17:45:03 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:37.054 * Looking for test storage... 
00:10:37.054 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.054 17:45:04 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.054 --rc genhtml_branch_coverage=1 00:10:37.054 --rc genhtml_function_coverage=1 00:10:37.054 --rc genhtml_legend=1 00:10:37.054 --rc geninfo_all_blocks=1 00:10:37.054 --rc geninfo_unexecuted_blocks=1 00:10:37.054 00:10:37.054 ' 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.054 --rc genhtml_branch_coverage=1 00:10:37.054 --rc genhtml_function_coverage=1 00:10:37.054 --rc genhtml_legend=1 00:10:37.054 --rc geninfo_all_blocks=1 00:10:37.054 --rc geninfo_unexecuted_blocks=1 00:10:37.054 00:10:37.054 ' 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:10:37.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.054 --rc genhtml_branch_coverage=1 00:10:37.054 --rc genhtml_function_coverage=1 00:10:37.054 --rc genhtml_legend=1 00:10:37.054 --rc geninfo_all_blocks=1 00:10:37.054 --rc geninfo_unexecuted_blocks=1 00:10:37.054 00:10:37.054 ' 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.054 --rc genhtml_branch_coverage=1 00:10:37.054 --rc genhtml_function_coverage=1 00:10:37.054 --rc genhtml_legend=1 00:10:37.054 --rc geninfo_all_blocks=1 00:10:37.054 --rc geninfo_unexecuted_blocks=1 00:10:37.054 00:10:37.054 ' 00:10:37.054 17:45:04 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:37.054 17:45:04 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:37.054 17:45:04 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:10:37.313 17:45:04 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:10:37.313 17:45:04 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:37.313 17:45:04 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:10:37.313 17:45:04 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:37.313 17:45:04 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67105 00:10:37.313 17:45:04 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:37.313 17:45:04 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:37.313 17:45:04 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67105 00:10:37.313 17:45:04 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67105 ']' 00:10:37.313 17:45:04 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.313 17:45:04 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:37.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.313 17:45:04 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.313 17:45:04 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:37.314 17:45:04 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:37.314 [2024-11-20 17:45:04.405537] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:10:37.314 [2024-11-20 17:45:04.405662] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67105 ] 00:10:37.572 [2024-11-20 17:45:04.587912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:37.572 [2024-11-20 17:45:04.706392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:37.572 [2024-11-20 17:45:04.706430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:38.509 17:45:05 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:38.509 17:45:05 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:38.509 17:45:05 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:38.768 Nvme0n1 00:10:39.027 17:45:05 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:39.027 17:45:05 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:39.027 request: 00:10:39.027 { 00:10:39.027 "bdev_name": "Nvme0n1", 00:10:39.027 "filename": "non_existing_file", 00:10:39.027 "method": "bdev_nvme_apply_firmware", 00:10:39.027 "req_id": 1 00:10:39.027 } 00:10:39.027 Got JSON-RPC error response 00:10:39.027 response: 00:10:39.027 { 00:10:39.027 "code": -32603, 00:10:39.027 "message": "open file failed." 00:10:39.027 } 00:10:39.027 17:45:06 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:39.027 17:45:06 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:39.027 17:45:06 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:39.287 17:45:06 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:39.287 17:45:06 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67105 00:10:39.287 17:45:06 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67105 ']' 00:10:39.287 17:45:06 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67105 00:10:39.287 17:45:06 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:10:39.287 17:45:06 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:39.287 17:45:06 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67105 00:10:39.287 17:45:06 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:39.287 17:45:06 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:39.287 killing process with pid 67105 00:10:39.287 17:45:06 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67105' 00:10:39.287 17:45:06 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67105 00:10:39.287 17:45:06 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67105 00:10:41.821 00:10:41.821 real 0m4.823s 00:10:41.821 user 0m8.865s 00:10:41.821 sys 0m0.812s 00:10:41.822 17:45:08 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:41.822 17:45:08 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:41.822 ************************************ 00:10:41.822 END TEST nvme_rpc 00:10:41.822 ************************************ 00:10:41.822 17:45:08 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:41.822 17:45:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:10:41.822 17:45:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:41.822 17:45:08 -- common/autotest_common.sh@10 -- # set +x 00:10:41.822 ************************************ 00:10:41.822 START TEST nvme_rpc_timeouts 00:10:41.822 ************************************ 00:10:41.822 17:45:08 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:41.822 * Looking for test storage... 00:10:41.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:41.822 17:45:08 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:41.822 17:45:08 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:10:41.822 17:45:08 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:42.080 17:45:09 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:42.080 17:45:09 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:42.080 17:45:09 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:42.080 17:45:09 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:42.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.080 --rc genhtml_branch_coverage=1 00:10:42.080 --rc genhtml_function_coverage=1 00:10:42.080 --rc genhtml_legend=1 00:10:42.080 --rc geninfo_all_blocks=1 00:10:42.080 --rc geninfo_unexecuted_blocks=1 00:10:42.080 00:10:42.080 ' 00:10:42.080 17:45:09 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:42.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.080 --rc genhtml_branch_coverage=1 00:10:42.080 --rc genhtml_function_coverage=1 00:10:42.080 --rc genhtml_legend=1 00:10:42.080 --rc geninfo_all_blocks=1 00:10:42.080 --rc geninfo_unexecuted_blocks=1 00:10:42.080 00:10:42.080 ' 00:10:42.080 17:45:09 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:42.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.080 --rc genhtml_branch_coverage=1 00:10:42.080 --rc genhtml_function_coverage=1 00:10:42.080 --rc genhtml_legend=1 00:10:42.080 --rc geninfo_all_blocks=1 00:10:42.080 --rc geninfo_unexecuted_blocks=1 00:10:42.080 00:10:42.080 ' 00:10:42.080 17:45:09 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:42.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:42.080 --rc genhtml_branch_coverage=1 00:10:42.080 --rc genhtml_function_coverage=1 00:10:42.080 --rc genhtml_legend=1 00:10:42.080 --rc geninfo_all_blocks=1 00:10:42.080 --rc geninfo_unexecuted_blocks=1 00:10:42.080 00:10:42.080 ' 00:10:42.080 17:45:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:42.080 17:45:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67187 00:10:42.080 17:45:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67187 00:10:42.080 17:45:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67219 00:10:42.080 17:45:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:42.080 17:45:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:10:42.080 17:45:09 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67219 00:10:42.080 17:45:09 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67219 ']' 00:10:42.080 17:45:09 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.080 17:45:09 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.081 17:45:09 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.081 17:45:09 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.081 17:45:09 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:42.081 [2024-11-20 17:45:09.181287] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:10:42.081 [2024-11-20 17:45:09.181414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67219 ] 00:10:42.339 [2024-11-20 17:45:09.361395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:42.339 [2024-11-20 17:45:09.479728] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.339 [2024-11-20 17:45:09.479814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.274 Checking default timeout settings: 00:10:43.274 17:45:10 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.274 17:45:10 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:10:43.274 17:45:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:43.274 17:45:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:43.533 Making settings changes with rpc: 00:10:43.533 17:45:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:43.533 17:45:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:43.792 Check default vs. modified settings: 00:10:43.792 17:45:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:43.792 17:45:10 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67187 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67187 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:44.358 Setting action_on_timeout is changed as expected. 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67187 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:44.358 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67187 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:44.359 Setting timeout_us is changed as expected. 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67187 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67187 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:44.359 Setting timeout_admin_us is changed as expected. 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67187 /tmp/settings_modified_67187 00:10:44.359 17:45:11 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67219 00:10:44.359 17:45:11 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67219 ']' 00:10:44.359 17:45:11 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67219 00:10:44.359 17:45:11 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:10:44.359 17:45:11 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:44.359 17:45:11 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67219 00:10:44.359 17:45:11 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:44.359 17:45:11 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:44.359 killing process with pid 67219 00:10:44.359 17:45:11 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67219' 00:10:44.359 17:45:11 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67219 00:10:44.359 17:45:11 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67219 00:10:46.894 RPC TIMEOUT SETTING TEST PASSED. 00:10:46.894 17:45:13 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
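The passing checks above reduce to a save/modify/save/compare pattern. A minimal sketch of that flow, reconstructed from the xtrace (the rpc.py path, the 67187 tmpfile suffix, and the timeout values are taken from the log; the loop body is a simplification):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  def_cfg=/tmp/settings_default_67187
  mod_cfg=/tmp/settings_modified_67187

  "$rpc" save_config > "$def_cfg"                  # snapshot the default timeouts
  "$rpc" bdev_nvme_set_options --timeout-us=12000000 \
      --timeout-admin-us=24000000 --action-on-timeout=abort
  "$rpc" save_config > "$mod_cfg"                  # snapshot the modified timeouts

  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" "$def_cfg" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" "$mod_cfg" | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [ "$before" == "$after" ] && exit 1          # unchanged value: the RPC had no effect
      echo "Setting $setting is changed as expected."
  done

Each grep pulls the setting's line out of the saved JSON config, awk takes the value field, and sed strips quotes and punctuation so that none/abort and 0/12000000 compare cleanly.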
00:10:46.894 00:10:46.894 real 0m4.972s 00:10:46.894 user 0m9.346s 00:10:46.894 sys 0m0.824s 00:10:46.894 17:45:13 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:46.894 17:45:13 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:46.894 ************************************ 00:10:46.894 END TEST nvme_rpc_timeouts 00:10:46.894 ************************************ 00:10:46.894 17:45:13 -- spdk/autotest.sh@239 -- # uname -s 00:10:46.894 17:45:13 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:46.894 17:45:13 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:46.894 17:45:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:46.894 17:45:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:46.894 17:45:13 -- common/autotest_common.sh@10 -- # set +x 00:10:46.894 ************************************ 00:10:46.894 START TEST sw_hotplug 00:10:46.894 ************************************ 00:10:46.894 17:45:13 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:46.894 * Looking for test storage... 00:10:46.894 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:46.894 17:45:14 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:46.894 17:45:14 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:10:46.894 17:45:14 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:47.154 17:45:14 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:47.154 17:45:14 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:47.154 17:45:14 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:47.154 17:45:14 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:47.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.154 --rc genhtml_branch_coverage=1 00:10:47.154 --rc genhtml_function_coverage=1 00:10:47.154 --rc genhtml_legend=1 00:10:47.154 --rc geninfo_all_blocks=1 00:10:47.154 --rc geninfo_unexecuted_blocks=1 00:10:47.154 00:10:47.154 ' 00:10:47.154 17:45:14 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:47.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.154 --rc genhtml_branch_coverage=1 00:10:47.154 --rc genhtml_function_coverage=1 00:10:47.155 --rc genhtml_legend=1 00:10:47.155 --rc geninfo_all_blocks=1 00:10:47.155 --rc geninfo_unexecuted_blocks=1 00:10:47.155 00:10:47.155 ' 00:10:47.155 17:45:14 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:47.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.155 --rc genhtml_branch_coverage=1 00:10:47.155 --rc genhtml_function_coverage=1 00:10:47.155 --rc genhtml_legend=1 00:10:47.155 --rc geninfo_all_blocks=1 00:10:47.155 --rc geninfo_unexecuted_blocks=1 00:10:47.155 00:10:47.155 ' 00:10:47.155 17:45:14 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:47.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:47.155 --rc genhtml_branch_coverage=1 00:10:47.155 --rc genhtml_function_coverage=1 00:10:47.155 --rc genhtml_legend=1 00:10:47.155 --rc geninfo_all_blocks=1 00:10:47.155 --rc geninfo_unexecuted_blocks=1 00:10:47.155 00:10:47.155 ' 00:10:47.155 17:45:14 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:47.722 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:47.722 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:47.723 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:47.723 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:47.723 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:47.982 17:45:14 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:47.982 17:45:14 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:47.982 17:45:14 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
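The nvme_in_userspace call that follows resolves which controllers the test may touch by matching PCI class codes: class 01 (mass storage), subclass 08 (non-volatile memory controller), prog-if 02 (NVM Express), hence the "0108" pattern and the -p02 filter visible in the trace below. Pulled out as a standalone sketch (the function name is illustrative):

  # Print the PCI address of every NVMe controller (class/subclass 0108, prog-if 02).
  nvme_bdfs() {
      lspci -mm -n -D |
          grep -i -- -p02 |
          awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' |
          tr -d '"'
  }
  nvme_bdfs    # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0

grep -i -- -p02 keeps only NVMe-programming-interface rows, and the awk match against the quoted class field does the 0108 comparison; the test then keeps only the first nvme_count=2 of the four addresses it finds.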
00:10:47.982 17:45:14 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:47.982 17:45:14 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:47.982 17:45:14 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:47.983 17:45:14 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:47.983 17:45:15 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:47.983 17:45:15 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:47.983 17:45:15 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:47.983 17:45:15 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:47.983 17:45:15 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:48.551 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:48.810 Waiting for block devices as requested 00:10:48.810 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:48.810 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:49.069 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:49.069 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:54.337 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:54.337 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:54.337 17:45:21 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:54.905 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:54.905 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:54.905 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:55.164 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:55.422 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:55.422 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:55.681 17:45:22 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:55.681 17:45:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:55.681 17:45:22 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:55.681 17:45:22 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:55.681 17:45:22 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:55.681 17:45:22 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68107 00:10:55.681 17:45:22 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:55.681 17:45:22 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:55.681 17:45:22 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:55.681 17:45:22 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:55.681 17:45:22 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:55.681 17:45:22 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:55.681 17:45:22 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:55.681 17:45:22 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:10:55.681 17:45:22 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:55.681 17:45:22 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:55.681 17:45:22 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:55.681 17:45:22 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:55.681 17:45:22 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:55.940 Initializing NVMe Controllers 00:10:55.940 Attaching to 0000:00:10.0 00:10:55.940 Attaching to 0000:00:11.0 00:10:55.940 Attached to 0000:00:11.0 00:10:55.940 Attached to 0000:00:10.0 00:10:55.940 Initialization complete. Starting I/O... 
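remove_attach_helper 3 6 false, traced above, drives three hot-remove/re-attach cycles against the two selected controllers with a 6-second settle time, watching the kernel's PCI layer rather than SPDK bdevs (use_bdev=false). A skeleton of the loop as the trace implies it; xtrace does not echo redirections, so the sysfs targets shown here are assumptions, not paths read from the log:

  remove_attach_helper() {
      local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3
      sleep "$hotplug_wait"                                  # let the hotplug app settle
      while (( hotplug_events-- )); do
          for dev in "${nvmes[@]}"; do
              echo 1 > "/sys/bus/pci/devices/$dev/remove"    # hot-remove (assumed target)
          done
          echo 1 > /sys/bus/pci/rescan                       # re-discover the devices
          sleep $(( hotplug_wait * 2 ))                      # matches the 'sleep 12' below
      done
  }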
00:10:55.940 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:55.940 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:55.940 00:10:57.314 QEMU NVMe Ctrl (12341 ): 1448 I/Os completed (+1448) 00:10:57.314 QEMU NVMe Ctrl (12340 ): 1448 I/Os completed (+1448) 00:10:57.314 00:10:58.250 QEMU NVMe Ctrl (12341 ): 3408 I/Os completed (+1960) 00:10:58.250 QEMU NVMe Ctrl (12340 ): 3408 I/Os completed (+1960) 00:10:58.250 00:10:59.186 QEMU NVMe Ctrl (12341 ): 5575 I/Os completed (+2167) 00:10:59.186 QEMU NVMe Ctrl (12340 ): 5572 I/Os completed (+2164) 00:10:59.186 00:11:00.122 QEMU NVMe Ctrl (12341 ): 7691 I/Os completed (+2116) 00:11:00.122 QEMU NVMe Ctrl (12340 ): 7688 I/Os completed (+2116) 00:11:00.122 00:11:01.058 QEMU NVMe Ctrl (12341 ): 9751 I/Os completed (+2060) 00:11:01.058 QEMU NVMe Ctrl (12340 ): 9750 I/Os completed (+2062) 00:11:01.058 00:11:01.995 17:45:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:01.995 17:45:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:01.995 17:45:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:01.995 [2024-11-20 17:45:28.845146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:01.995 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:01.995 [2024-11-20 17:45:28.847170] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.995 [2024-11-20 17:45:28.847225] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.995 [2024-11-20 17:45:28.847248] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.995 [2024-11-20 17:45:28.847272] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.995 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:01.995 [2024-11-20 17:45:28.850209] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.995 [2024-11-20 17:45:28.850268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.995 [2024-11-20 17:45:28.850288] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.995 [2024-11-20 17:45:28.850309] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.995 17:45:28 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:01.995 17:45:28 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:01.995 [2024-11-20 17:45:28.884243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:01.995 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:01.995 [2024-11-20 17:45:28.885954] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.995 [2024-11-20 17:45:28.886004] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.995 [2024-11-20 17:45:28.886034] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.995 [2024-11-20 17:45:28.886066] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.995 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:01.995 [2024-11-20 17:45:28.888713] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.995 [2024-11-20 17:45:28.888756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.995 [2024-11-20 17:45:28.888787] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.995 [2024-11-20 17:45:28.888804] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:01.995 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:01.995 EAL: Scan for (pci) bus failed. 00:11:01.995 17:45:28 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:01.995 17:45:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:01.995 17:45:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:01.995 17:45:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:01.995 17:45:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:01.995 00:11:01.995 17:45:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:01.995 17:45:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:01.995 17:45:29 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:01.995 17:45:29 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:01.995 17:45:29 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:01.995 Attaching to 0000:00:10.0 00:11:01.995 Attached to 0000:00:10.0 00:11:02.253 17:45:29 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:02.253 17:45:29 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:02.253 17:45:29 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:02.253 Attaching to 0000:00:11.0 00:11:02.253 Attached to 0000:00:11.0 00:11:03.190 QEMU NVMe Ctrl (12340 ): 1968 I/Os completed (+1968) 00:11:03.190 QEMU NVMe Ctrl (12341 ): 1736 I/Os completed (+1736) 00:11:03.190 00:11:04.179 QEMU NVMe Ctrl (12340 ): 4172 I/Os completed (+2204) 00:11:04.179 QEMU NVMe Ctrl (12341 ): 3940 I/Os completed (+2204) 00:11:04.179 00:11:05.117 QEMU NVMe Ctrl (12340 ): 6368 I/Os completed (+2196) 00:11:05.117 QEMU NVMe Ctrl (12341 ): 6136 I/Os completed (+2196) 00:11:05.117 00:11:06.054 QEMU NVMe Ctrl (12340 ): 8572 I/Os completed (+2204) 00:11:06.054 QEMU NVMe Ctrl (12341 ): 8340 I/Os completed (+2204) 00:11:06.054 00:11:06.997 QEMU NVMe Ctrl (12340 ): 10720 I/Os completed (+2148) 00:11:06.997 QEMU NVMe Ctrl (12341 ): 10488 I/Os completed (+2148) 00:11:06.997 00:11:07.933 QEMU NVMe Ctrl (12340 ): 12823 I/Os completed (+2103) 00:11:07.933 QEMU NVMe Ctrl (12341 ): 12637 I/Os completed (+2149) 00:11:07.933 00:11:09.310 QEMU NVMe Ctrl (12340 ): 14959 I/Os completed (+2136) 00:11:09.310 QEMU NVMe Ctrl (12341 ): 14773 I/Os completed (+2136) 
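The echo uio_pci_generic / echo <bdf> / echo '' triplet above (sw_hotplug.sh@59-62) is the re-attach half of a cycle: each rescanned controller is steered back to the userspace driver before I/O resumes. Since xtrace hides the redirection targets, the following is an assumed equivalent of what those writes land on:

  bdf=0000:00:10.0
  echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"   # pin the driver
  echo "$bdf" > /sys/bus/pci/drivers_probe                             # trigger the bind
  echo '' > "/sys/bus/pci/devices/$bdf/driver_override"                # clear the override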
00:11:09.310 00:11:10.261 QEMU NVMe Ctrl (12340 ): 17103 I/Os completed (+2144) 00:11:10.261 QEMU NVMe Ctrl (12341 ): 16917 I/Os completed (+2144) 00:11:10.261 00:11:11.195 QEMU NVMe Ctrl (12340 ): 19251 I/Os completed (+2148) 00:11:11.195 QEMU NVMe Ctrl (12341 ): 19065 I/Os completed (+2148) 00:11:11.195 00:11:12.129 QEMU NVMe Ctrl (12340 ): 21407 I/Os completed (+2156) 00:11:12.129 QEMU NVMe Ctrl (12341 ): 21221 I/Os completed (+2156) 00:11:12.129 00:11:13.063 QEMU NVMe Ctrl (12340 ): 23607 I/Os completed (+2200) 00:11:13.063 QEMU NVMe Ctrl (12341 ): 23421 I/Os completed (+2200) 00:11:13.063 00:11:13.995 QEMU NVMe Ctrl (12340 ): 25755 I/Os completed (+2148) 00:11:13.995 QEMU NVMe Ctrl (12341 ): 25569 I/Os completed (+2148) 00:11:13.995 00:11:14.253 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:14.253 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:14.253 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.253 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.253 [2024-11-20 17:45:41.258191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:14.253 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:14.253 [2024-11-20 17:45:41.260255] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.253 [2024-11-20 17:45:41.260423] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.253 [2024-11-20 17:45:41.260493] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.253 [2024-11-20 17:45:41.260621] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.253 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:14.253 [2024-11-20 17:45:41.263825] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.253 [2024-11-20 17:45:41.263976] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.253 [2024-11-20 17:45:41.264030] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.253 [2024-11-20 17:45:41.264052] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.253 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:14.253 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:14.253 [2024-11-20 17:45:41.292991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:14.253 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:14.253 [2024-11-20 17:45:41.295212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.253 [2024-11-20 17:45:41.295379] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.253 [2024-11-20 17:45:41.295462] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.253 [2024-11-20 17:45:41.295491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.253 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:14.253 [2024-11-20 17:45:41.298392] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.253 [2024-11-20 17:45:41.298441] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.253 [2024-11-20 17:45:41.298467] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.253 [2024-11-20 17:45:41.298490] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:14.253 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:14.253 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:14.253 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.253 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.253 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:14.511 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:14.511 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.511 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:14.511 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:14.511 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:14.511 Attaching to 0000:00:10.0 00:11:14.511 Attached to 0000:00:10.0 00:11:14.511 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:14.511 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:14.511 17:45:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:14.511 Attaching to 0000:00:11.0 00:11:14.511 Attached to 0000:00:11.0 00:11:15.077 QEMU NVMe Ctrl (12340 ): 1028 I/Os completed (+1028) 00:11:15.077 QEMU NVMe Ctrl (12341 ): 852 I/Os completed (+852) 00:11:15.077 00:11:16.087 QEMU NVMe Ctrl (12340 ): 3236 I/Os completed (+2208) 00:11:16.087 QEMU NVMe Ctrl (12341 ): 3060 I/Os completed (+2208) 00:11:16.087 00:11:17.025 QEMU NVMe Ctrl (12340 ): 5424 I/Os completed (+2188) 00:11:17.025 QEMU NVMe Ctrl (12341 ): 5248 I/Os completed (+2188) 00:11:17.025 00:11:17.960 QEMU NVMe Ctrl (12340 ): 7640 I/Os completed (+2216) 00:11:17.960 QEMU NVMe Ctrl (12341 ): 7466 I/Os completed (+2218) 00:11:17.960 00:11:18.894 QEMU NVMe Ctrl (12340 ): 9848 I/Os completed (+2208) 00:11:18.894 QEMU NVMe Ctrl (12341 ): 9675 I/Os completed (+2209) 00:11:18.894 00:11:20.266 QEMU NVMe Ctrl (12340 ): 12044 I/Os completed (+2196) 00:11:20.266 QEMU NVMe Ctrl (12341 ): 11873 I/Os completed (+2198) 00:11:20.266 00:11:21.201 QEMU NVMe Ctrl (12340 ): 14272 I/Os completed (+2228) 00:11:21.201 QEMU NVMe Ctrl (12341 ): 14101 I/Os completed (+2228) 00:11:21.201 00:11:22.135 QEMU NVMe Ctrl (12340 ): 16492 I/Os completed (+2220) 00:11:22.135 QEMU NVMe Ctrl (12341 ): 16321 I/Os completed (+2220) 00:11:22.135 00:11:23.066 QEMU 
NVMe Ctrl (12340 ): 18716 I/Os completed (+2224) 00:11:23.066 QEMU NVMe Ctrl (12341 ): 18545 I/Os completed (+2224) 00:11:23.066 00:11:23.999 QEMU NVMe Ctrl (12340 ): 20764 I/Os completed (+2048) 00:11:23.999 QEMU NVMe Ctrl (12341 ): 20595 I/Os completed (+2050) 00:11:23.999 00:11:24.934 QEMU NVMe Ctrl (12340 ): 22876 I/Os completed (+2112) 00:11:24.934 QEMU NVMe Ctrl (12341 ): 22708 I/Os completed (+2113) 00:11:24.934 00:11:25.868 QEMU NVMe Ctrl (12340 ): 25036 I/Os completed (+2160) 00:11:25.868 QEMU NVMe Ctrl (12341 ): 24868 I/Os completed (+2160) 00:11:25.868 00:11:26.803 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:26.803 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:26.803 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.803 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.803 [2024-11-20 17:45:53.638798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:26.804 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:26.804 [2024-11-20 17:45:53.642489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.804 [2024-11-20 17:45:53.642661] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.804 [2024-11-20 17:45:53.642814] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.804 [2024-11-20 17:45:53.642928] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.804 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:26.804 [2024-11-20 17:45:53.646067] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.804 [2024-11-20 17:45:53.646212] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.804 [2024-11-20 17:45:53.646268] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.804 [2024-11-20 17:45:53.646340] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.804 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:26.804 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:26.804 [2024-11-20 17:45:53.678368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:26.804 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:26.804 [2024-11-20 17:45:53.680224] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.804 [2024-11-20 17:45:53.680326] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.804 [2024-11-20 17:45:53.680380] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.804 [2024-11-20 17:45:53.680489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.804 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:26.804 [2024-11-20 17:45:53.683402] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.804 [2024-11-20 17:45:53.683534] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.804 [2024-11-20 17:45:53.683634] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.804 [2024-11-20 17:45:53.683725] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:26.804 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:26.804 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:26.804 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:26.804 EAL: Scan for (pci) bus failed. 00:11:26.804 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.804 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.804 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:26.804 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:26.804 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.804 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:26.804 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:26.804 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:26.804 Attaching to 0000:00:10.0 00:11:26.804 Attached to 0000:00:10.0 00:11:27.062 17:45:53 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:27.062 17:45:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:27.062 17:45:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:27.062 Attaching to 0000:00:11.0 00:11:27.062 Attached to 0000:00:11.0 00:11:27.062 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:27.062 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:27.062 [2024-11-20 17:45:54.030185] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:39.315 17:46:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:39.315 17:46:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:39.315 17:46:06 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.19 00:11:39.315 17:46:06 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.19 00:11:39.315 17:46:06 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:39.315 17:46:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.19 00:11:39.315 17:46:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.19 2 00:11:39.315 remove_attach_helper took 43.19s to complete (handling 2 nvme drive(s)) 17:46:06 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:45.874 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68107 00:11:45.874 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68107) - No such process 00:11:45.874 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68107 00:11:45.874 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:45.874 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:45.874 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:45.874 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68656 00:11:45.874 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:45.874 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:45.874 17:46:12 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68656 00:11:45.874 17:46:12 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 68656 ']' 00:11:45.874 17:46:12 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.874 17:46:12 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.874 17:46:12 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.874 17:46:12 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.874 17:46:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.874 [2024-11-20 17:46:12.143007] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
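The 43.19 s reported a few lines up is produced by bash itself: timing_cmd runs the helper under the time keyword with TIMEFORMAT=%2R, which prints only the real (wall-clock) time to two decimals. A self-contained sketch of that mechanism (the real helper also propagates the command's exit status, omitted here):

  timing_cmd() {
      local TIMEFORMAT=%2R elapsed
      exec 3>&1                                    # keep a handle on the real stdout
      # `time` reports on stderr; capture that while the command's stdout passes through.
      # Note: any stderr from "$@" is captured too; the real helper is more careful.
      elapsed=$( { time "$@" 1>&3; } 2>&1 )
      exec 3>&-
      echo "took ${elapsed}s"
  }
  timing_cmd sleep 1    # -> took 1.00s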
00:11:45.874 [2024-11-20 17:46:12.143338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68656 ] 00:11:45.874 [2024-11-20 17:46:12.322680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.874 [2024-11-20 17:46:12.436322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.440 17:46:13 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.440 17:46:13 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:11:46.440 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:46.440 17:46:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.440 17:46:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:46.440 17:46:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.440 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:46.440 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:46.440 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:46.440 17:46:13 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:46.440 17:46:13 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:46.440 17:46:13 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:46.440 17:46:13 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:46.440 17:46:13 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:46.440 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:46.440 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:46.440 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:46.440 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:46.440 17:46:13 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.991 [2024-11-20 17:46:19.419836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
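This second phase flips to use_bdev=true: instead of watching the PCI bus, the test enables SPDK's own hotplug monitor in the freshly started target and observes removals through the bdev layer. The two RPCs involved, as they appear in the trace (rpc.py talks to the target over /var/tmp/spdk.sock):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_nvme_set_hotplug -e    # enable periodic NVMe hotplug monitoring
  "$rpc" bdev_get_bdevs              # attached namespaces are listed as bdevs (JSON)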
00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.991 17:46:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.991 17:46:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.991 [2024-11-20 17:46:19.422419] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.991 [2024-11-20 17:46:19.422467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.991 [2024-11-20 17:46:19.422485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.991 [2024-11-20 17:46:19.422514] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.991 [2024-11-20 17:46:19.422526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.991 [2024-11-20 17:46:19.422542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.991 [2024-11-20 17:46:19.422557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.991 [2024-11-20 17:46:19.422572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.991 [2024-11-20 17:46:19.422584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.991 [2024-11-20 17:46:19.422605] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.991 [2024-11-20 17:46:19.422617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.991 [2024-11-20 17:46:19.422632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.991 17:46:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:52.991 [2024-11-20 17:46:19.919014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:52.991 [2024-11-20 17:46:19.921436] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.991 [2024-11-20 17:46:19.921482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.991 [2024-11-20 17:46:19.921503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.991 [2024-11-20 17:46:19.921528] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.991 [2024-11-20 17:46:19.921544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.991 [2024-11-20 17:46:19.921557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.991 [2024-11-20 17:46:19.921572] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.991 [2024-11-20 17:46:19.921584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.991 [2024-11-20 17:46:19.921598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.991 [2024-11-20 17:46:19.921611] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:52.991 [2024-11-20 17:46:19.921624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:52.991 [2024-11-20 17:46:19.921636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:52.991 17:46:19 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:52.991 17:46:19 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.991 17:46:19 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:52.991 17:46:19 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.991 17:46:20 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:52.991 17:46:20 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:52.991 17:46:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:52.991 17:46:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:52.991 17:46:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:53.250 17:46:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:53.250 17:46:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.250 17:46:20 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:53.250 17:46:20 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:53.250 17:46:20 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
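bdev_bdfs, traced repeatedly in this phase, maps the surviving bdevs back to PCI addresses so the script can poll until a removed controller's BDF drops out of the list (the 'Still waiting for 0000:00:11.0 to be gone' lines above). Reconstructed from the jq and sort -u calls in the trace, with the polling condition simplified to a grep:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  bdev_bdfs() {
      "$rpc" bdev_get_bdevs |
          jq -r '.[].driver_specific.nvme[].pci_address' |
          sort -u
  }
  while bdev_bdfs | grep -q 0000:00:11.0; do       # simplification of the array check
      printf 'Still waiting for %s to be gone\n' 0000:00:11.0
      sleep 0.5
  done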
00:11:53.250 17:46:20 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:53.250 17:46:20 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:53.250 17:46:20 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.481 17:46:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.481 17:46:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.481 17:46:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:05.481 [2024-11-20 17:46:32.498786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:05.481 [2024-11-20 17:46:32.501615] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.481 [2024-11-20 17:46:32.501793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.481 [2024-11-20 17:46:32.501885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.481 [2024-11-20 17:46:32.501919] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.481 [2024-11-20 17:46:32.501933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.481 [2024-11-20 17:46:32.501947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.481 [2024-11-20 17:46:32.501961] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.481 [2024-11-20 17:46:32.501974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.481 [2024-11-20 17:46:32.501986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.481 [2024-11-20 17:46:32.502001] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.481 [2024-11-20 17:46:32.502012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.481 [2024-11-20 17:46:32.502026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 
cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:05.481 17:46:32 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.481 17:46:32 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.481 17:46:32 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:05.481 17:46:32 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:05.740 [2024-11-20 17:46:32.898130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:05.740 [2024-11-20 17:46:32.900649] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.740 [2024-11-20 17:46:32.900694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.740 [2024-11-20 17:46:32.900717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.740 [2024-11-20 17:46:32.900742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.740 [2024-11-20 17:46:32.900757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.740 [2024-11-20 17:46:32.900780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.740 [2024-11-20 17:46:32.900797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.740 [2024-11-20 17:46:32.900809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.740 [2024-11-20 17:46:32.900823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.740 [2024-11-20 17:46:32.900837] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:05.740 [2024-11-20 17:46:32.900850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:05.740 [2024-11-20 17:46:32.900862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:05.998 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:05.998 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:05.998 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:05.998 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:05.998 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:05.998 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 
00:12:05.998 17:46:33 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.998 17:46:33 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:05.998 17:46:33 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.998 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:05.998 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:06.257 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:06.257 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:06.257 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:06.257 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:06.257 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:06.257 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:06.257 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:06.257 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:06.257 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:06.515 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:06.515 17:46:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:18.777 17:46:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.777 17:46:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.777 17:46:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:18.777 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:18.777 17:46:45 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.777 17:46:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:18.777 [2024-11-20 17:46:45.577756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
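The bdev_bdfs helper traced repeatedly above at nvme/sw_hotplug.sh@12-13 never appears in full, but its xtrace lines (rpc_cmd bdev_get_bdevs, jq reading /dev/fd/63, sort -u) pin down a small pipeline over the SPDK RPC. A minimal sketch consistent with that trace follows; the function body is reconstructed, not quoted from the script, and rpc_cmd is the harness wrapper around scripts/rpc.py:

    # Reconstructed from the @12/@13 trace. The /dev/fd/63 seen in the log is
    # the process substitution that feeds rpc_cmd's JSON output into jq.
    bdev_bdfs() {
        jq -r '.[].driver_specific.nvme[].pci_address' \
            <(rpc_cmd bdev_get_bdevs) | sort -u
    }

Each NVMe bdev returned by bdev_get_bdevs reports its controller's PCI address under driver_specific.nvme[], so the sorted, de-duplicated output is exactly the set of BDFs the SPDK target still sees; the (( N > 0 )) checks above are counting that list.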
00:12:18.777 [2024-11-20 17:46:45.580318] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.777 [2024-11-20 17:46:45.580366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.778 [2024-11-20 17:46:45.580383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.778 [2024-11-20 17:46:45.580411] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.778 [2024-11-20 17:46:45.580423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.778 [2024-11-20 17:46:45.580441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.778 [2024-11-20 17:46:45.580454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.778 [2024-11-20 17:46:45.580468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.778 [2024-11-20 17:46:45.580479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.778 [2024-11-20 17:46:45.580494] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:18.778 [2024-11-20 17:46:45.580506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:18.778 [2024-11-20 17:46:45.580519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:18.778 17:46:45 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.778 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:18.778 17:46:45 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:19.037 [2024-11-20 17:46:46.076976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
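Every hotplug pass in this stretch of the log has the same traced shape: sw_hotplug.sh@39-40 surprise-removes each controller, @50-51 polls bdev_bdfs until the removed BDFs disappear, @56-62 brings the devices back under uio_pci_generic, and @66-71 sleeps and then verifies the full set returned. Bash xtrace never prints redirections, so the sysfs targets in the reconstruction below are assumptions; only the echoed values are taken from the trace:

    for dev in "${nvmes[@]}"; do                    # @39-40: surprise-remove
        echo 1 > "/sys/bus/pci/devices/$dev/remove" # target assumed
    done
    bdfs=($(bdev_bdfs))
    while ((${#bdfs[@]} > 0)); do                   # @50-51: wait until SPDK
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"  # drops the bdevs
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done
    echo 1 > /sys/bus/pci/rescan                    # @56: re-enumerate (assumed)
    for dev in "${nvmes[@]}"; do                    # @58-62: rebind each BDF
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe    # @60-61: the trace shows the BDF
        echo "$dev" > /sys/bus/pci/drivers_probe    # written twice; targets hidden
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"  # @62: clear override
    done
    sleep 12                                        # @66: settle, twice the wait of 6
    bdfs=($(bdev_bdfs))                             # @70-71: both controllers back?
    [[ ${bdfs[*]} == "${nvmes[*]}" ]]

The nvme_ctrlr_fail and abort_trackers blocks interleaved with the trace are the expected side effect of the remove: SPDK fails the departed controller and completes each of its outstanding ASYNC EVENT REQUEST commands as ABORTED - BY REQUEST, four per controller per pass (cids 190 down to 187).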
00:12:19.037 [2024-11-20 17:46:46.079587] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.037 [2024-11-20 17:46:46.079648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.037 [2024-11-20 17:46:46.079685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.037 [2024-11-20 17:46:46.079712] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.037 [2024-11-20 17:46:46.079728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.037 [2024-11-20 17:46:46.079741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.037 [2024-11-20 17:46:46.079758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.037 [2024-11-20 17:46:46.079770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.037 [2024-11-20 17:46:46.079800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.037 [2024-11-20 17:46:46.079816] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.037 [2024-11-20 17:46:46.079830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.037 [2024-11-20 17:46:46.079843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.037 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:19.037 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:19.037 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:19.037 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:19.037 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:19.037 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:19.037 17:46:46 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.037 17:46:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.037 17:46:46 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.037 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:19.037 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:19.297 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:19.297 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:19.297 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:19.297 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:19.297 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:19.297 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:19.297 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:19.297 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:19.556 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:19.556 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:19.556 17:46:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.21 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.21 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.21 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.21 2 00:12:31.812 remove_attach_helper took 45.21s to complete (handling 2 nvme drive(s)) 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:12:31.812 17:46:58 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:31.812 17:46:58 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:31.812 17:46:58 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:38.374 17:47:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:38.374 17:47:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:38.374 17:47:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:38.374 17:47:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:38.374 17:47:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:38.374 17:47:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:38.374 17:47:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:38.374 17:47:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:38.374 17:47:04 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:38.374 17:47:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:38.374 17:47:04 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:38.374 17:47:04 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.374 17:47:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:38.374 [2024-11-20 17:47:04.669696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:38.374 [2024-11-20 17:47:04.672044] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.374 [2024-11-20 17:47:04.672089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.374 [2024-11-20 17:47:04.672105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.374 [2024-11-20 17:47:04.672133] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.374 [2024-11-20 17:47:04.672146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.374 [2024-11-20 17:47:04.672161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.374 [2024-11-20 17:47:04.672174] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.374 [2024-11-20 17:47:04.672188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.374 [2024-11-20 17:47:04.672199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.374 [2024-11-20 17:47:04.672214] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.374 [2024-11-20 17:47:04.672225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.374 [2024-11-20 17:47:04.672244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.374 17:47:04 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.374 17:47:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:38.374 17:47:04 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:38.374 [2024-11-20 17:47:05.168890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
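The 45.21s figure reported above comes from the timing plumbing traced at common/autotest_common.sh@709-722 and consumed at nvme/sw_hotplug.sh@19-22. A simplified sketch of that mechanism; the real helper's descriptor handling (the [[ -t 0 ]] and exec steps in the trace) is condensed here, so read this as the idea rather than the verbatim code:

    timing_cmd() (                          # subshell keeps the exec redirect local
        local cmd_es=0 time TIMEFORMAT=%2R  # %2R: bare wall-clock seconds, 2 decimals
        exec 3>&2                           # let the command's own output bypass
        time=$( { time "$@" 1>&3 2>&3; } 2>&1 ) || cmd_es=$?  # the capture below
        echo "$time"                        # only the elapsed time reaches stdout
        return "$cmd_es"
    )
    helper_time=$(timing_cmd remove_attach_helper 3 6 true)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" "${#nvmes[@]}"

The pass that starts here is the same helper re-armed for a second round: @119-120 above toggle SPDK's hotplug monitor off and back on with rpc_cmd bdev_nvme_set_hotplug -d and -e, and @27-29 re-enter remove_attach_helper with hotplug_events=3, hotplug_wait=6 and use_bdev=true.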
00:12:38.374 [2024-11-20 17:47:05.173477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.374 [2024-11-20 17:47:05.173533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.374 [2024-11-20 17:47:05.173557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.374 [2024-11-20 17:47:05.173581] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.374 [2024-11-20 17:47:05.173598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.374 [2024-11-20 17:47:05.173610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.374 [2024-11-20 17:47:05.173629] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.374 [2024-11-20 17:47:05.173640] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.374 [2024-11-20 17:47:05.173657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.374 [2024-11-20 17:47:05.173671] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:38.374 [2024-11-20 17:47:05.173687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:38.374 [2024-11-20 17:47:05.173700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:38.374 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:38.374 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:38.374 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:38.374 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:38.374 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:38.375 17:47:05 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.375 17:47:05 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:38.375 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:38.375 17:47:05 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.375 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:38.375 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:38.375 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:38.375 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:38.375 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:38.375 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:38.375 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:38.375 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:38.375 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:38.375 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:38.633 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:38.633 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:38.633 17:47:05 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.880 17:47:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.880 17:47:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.880 17:47:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:50.880 17:47:17 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.880 17:47:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:50.880 [2024-11-20 17:47:17.748668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
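A note on how the @71 comparison renders in the trace: the script matches the freshly collected BDF list against a quoted expansion, and inside [[ ... == ... ]] a quoted right-hand side is compared literally instead of as a glob pattern. set -x marks that literalness by backslash-escaping every character of the expanded word, which is why the log shows \0\0\0\0\:\0\0\:\1\0\.\0 rather than plain 0000:00:10.0. A two-line illustration (the variable name is invented for the demo):

    expected='0000:00:10.0 0000:00:11.0'
    set -x
    [[ "0000:00:10.0 0000:00:11.0" == "$expected" ]]
    # xtrace prints: [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ ... ]]
    set +x

Nothing is malformed in the log, in other words; the escaping is just how xtrace prints a pattern-position word that must not glob.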
00:12:50.880 [2024-11-20 17:47:17.751146] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.880 [2024-11-20 17:47:17.751198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.880 [2024-11-20 17:47:17.751215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.880 [2024-11-20 17:47:17.751243] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.880 [2024-11-20 17:47:17.751255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.880 [2024-11-20 17:47:17.751269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.880 [2024-11-20 17:47:17.751282] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.880 [2024-11-20 17:47:17.751296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.880 [2024-11-20 17:47:17.751309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.880 [2024-11-20 17:47:17.751324] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:50.880 [2024-11-20 17:47:17.751335] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:50.880 [2024-11-20 17:47:17.751351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:50.880 17:47:17 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:50.880 17:47:17 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:51.139 [2024-11-20 17:47:18.148016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:51.139 [2024-11-20 17:47:18.149793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.139 [2024-11-20 17:47:18.149833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.139 [2024-11-20 17:47:18.149852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.139 [2024-11-20 17:47:18.149878] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.140 [2024-11-20 17:47:18.149898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.140 [2024-11-20 17:47:18.149911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.140 [2024-11-20 17:47:18.149926] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.140 [2024-11-20 17:47:18.149937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.140 [2024-11-20 17:47:18.149951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.140 [2024-11-20 17:47:18.149965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:51.140 [2024-11-20 17:47:18.149979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:51.140 [2024-11-20 17:47:18.149991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:51.140 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:51.140 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:51.140 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:51.140 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:51.140 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:51.140 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:51.140 17:47:18 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.140 17:47:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:51.140 17:47:18 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.140 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:51.140 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:51.399 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:51.399 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:51.399 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:51.399 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:51.399 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:51.399 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:51.399 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:51.399 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:51.659 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:51.659 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:51.659 17:47:18 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:03.894 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:03.894 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:03.894 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:03.894 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.894 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.894 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.894 17:47:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.894 17:47:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.894 17:47:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.894 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:03.894 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:03.894 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:03.894 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:03.895 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:03.895 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:03.895 [2024-11-20 17:47:30.727797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:03.895 [2024-11-20 17:47:30.732567] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.895 [2024-11-20 17:47:30.732620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.895 [2024-11-20 17:47:30.732637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.895 [2024-11-20 17:47:30.732663] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.895 [2024-11-20 17:47:30.732675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.895 [2024-11-20 17:47:30.732690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.895 [2024-11-20 17:47:30.732704] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.895 [2024-11-20 17:47:30.732720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.895 [2024-11-20 17:47:30.732732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.895 [2024-11-20 17:47:30.732748] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:03.895 [2024-11-20 17:47:30.732759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:03.895 [2024-11-20 17:47:30.732785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:03.895 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:03.895 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:03.895 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:03.895 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:03.895 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:03.895 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:03.895 17:47:30 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.895 17:47:30 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:03.895 17:47:30 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.895 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:03.895 17:47:30 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:04.154 [2024-11-20 17:47:31.127144] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:04.154 [2024-11-20 17:47:31.128895] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.154 [2024-11-20 17:47:31.128936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.154 [2024-11-20 17:47:31.128956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.154 [2024-11-20 17:47:31.128980] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.154 [2024-11-20 17:47:31.128995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.154 [2024-11-20 17:47:31.129007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.154 [2024-11-20 17:47:31.129022] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.154 [2024-11-20 17:47:31.129034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.154 [2024-11-20 17:47:31.129048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.154 [2024-11-20 17:47:31.129061] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:04.154 [2024-11-20 17:47:31.129079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:04.154 [2024-11-20 17:47:31.129092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:04.154 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:04.154 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:04.154 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:04.154 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:04.154 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:04.154 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:13:04.154 17:47:31 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.154 17:47:31 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:04.154 17:47:31 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.413 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:04.413 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:04.413 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:04.413 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:04.413 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:04.413 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:04.413 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:04.413 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:04.413 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:04.413 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:04.672 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:04.672 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:04.672 17:47:31 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:16.970 17:47:43 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:16.970 17:47:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:16.970 17:47:43 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:16.970 17:47:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:16.970 17:47:43 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:16.970 17:47:43 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:16.970 17:47:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.970 17:47:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:16.970 17:47:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.970 17:47:43 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:16.970 17:47:43 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:16.970 17:47:43 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.13 00:13:16.970 17:47:43 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.13 00:13:16.970 17:47:43 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:13:16.970 17:47:43 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.13 00:13:16.970 17:47:43 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.13 2 00:13:16.970 remove_attach_helper took 45.13s to complete (handling 2 nvme drive(s)) 17:47:43 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:16.970 17:47:43 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68656 00:13:16.970 17:47:43 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 68656 ']' 00:13:16.970 17:47:43 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 68656 00:13:16.970 17:47:43 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:13:16.970 17:47:43 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:16.970 17:47:43 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68656 00:13:16.970 17:47:43 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:16.970 17:47:43 
sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:16.970 killing process with pid 68656 00:13:16.970 17:47:43 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68656' 00:13:16.970 17:47:43 sw_hotplug -- common/autotest_common.sh@973 -- # kill 68656 00:13:16.970 17:47:43 sw_hotplug -- common/autotest_common.sh@978 -- # wait 68656 00:13:18.876 17:47:46 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:19.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:20.011 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:20.011 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:20.011 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:20.270 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:20.270 00:13:20.270 real 2m33.397s 00:13:20.270 user 1m51.093s 00:13:20.270 sys 0m22.530s 00:13:20.270 17:47:47 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:20.270 17:47:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:20.270 ************************************ 00:13:20.270 END TEST sw_hotplug 00:13:20.270 ************************************ 00:13:20.270 17:47:47 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:20.270 17:47:47 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:20.270 17:47:47 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:20.270 17:47:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:20.270 17:47:47 -- common/autotest_common.sh@10 -- # set +x 00:13:20.270 ************************************ 00:13:20.270 START TEST nvme_xnvme 00:13:20.270 ************************************ 00:13:20.270 17:47:47 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:20.530 * Looking for test storage... 
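The hand-off visible here runs through run_test (spdk/autotest.sh@244), the wrapper responsible for the rows of asterisks, the START TEST / END TEST banners, and the real/user/sys totals printed for sw_hotplug just above, once killprocess (autotest_common.sh@954-978) has checked pid 68656 with kill -0, confirmed via ps that it is the SPDK reactor rather than a sudo wrapper, and then killed and waited on it. A rough reconstruction of run_test from its trace (@1105 argument guard, banners, timed body); the exact body and the usage message are inferred:

    run_test() {
        if [ "$#" -le 1 ]; then            # @1105: needs a name plus a command
            echo "usage: run_test <name> <cmd> [args...]" >&2  # message assumed
            return 1
        fi
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                          # source of the real/user/sys lines
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }

    run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh

xnvme.sh then opens by probing its environment: the cmp_versions trace that follows splits dotted version strings on IFS=.-: and compares them field by field to decide whether the installed lcov is new enough for the LCOV_OPTS being exported.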
00:13:20.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:20.530 17:47:47 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:20.530 17:47:47 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:20.530 17:47:47 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:20.530 17:47:47 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:20.530 17:47:47 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.530 17:47:47 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.530 17:47:47 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.530 17:47:47 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.530 17:47:47 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.530 17:47:47 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.530 17:47:47 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.530 17:47:47 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.530 17:47:47 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.531 17:47:47 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:20.531 17:47:47 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.531 17:47:47 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:20.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.531 --rc genhtml_branch_coverage=1 00:13:20.531 --rc genhtml_function_coverage=1 00:13:20.531 --rc genhtml_legend=1 00:13:20.531 --rc geninfo_all_blocks=1 00:13:20.531 --rc geninfo_unexecuted_blocks=1 00:13:20.531 00:13:20.531 ' 00:13:20.531 17:47:47 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:20.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.531 --rc genhtml_branch_coverage=1 00:13:20.531 --rc genhtml_function_coverage=1 00:13:20.531 --rc genhtml_legend=1 00:13:20.531 --rc geninfo_all_blocks=1 00:13:20.531 --rc geninfo_unexecuted_blocks=1 00:13:20.531 00:13:20.531 ' 00:13:20.531 17:47:47 
nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:20.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.531 --rc genhtml_branch_coverage=1 00:13:20.531 --rc genhtml_function_coverage=1 00:13:20.531 --rc genhtml_legend=1 00:13:20.531 --rc geninfo_all_blocks=1 00:13:20.531 --rc geninfo_unexecuted_blocks=1 00:13:20.531 00:13:20.531 ' 00:13:20.531 17:47:47 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:20.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.531 --rc genhtml_branch_coverage=1 00:13:20.531 --rc genhtml_function_coverage=1 00:13:20.531 --rc genhtml_legend=1 00:13:20.531 --rc geninfo_all_blocks=1 00:13:20.531 --rc geninfo_unexecuted_blocks=1 00:13:20.531 00:13:20.531 ' 00:13:20.531 17:47:47 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:13:20.531 17:47:47 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:13:20.531 17:47:47 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:20.531 17:47:47 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:13:20.531 17:47:47 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:20.531 17:47:47 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:20.531 17:47:47 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:20.531 17:47:47 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:13:20.531 17:47:47 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:13:20.531 17:47:47 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@20 -- # 
CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:20.531 17:47:47 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 
00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:20.532 17:47:47 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:20.532 17:47:47 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:20.532 17:47:47 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:13:20.532 17:47:47 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:13:20.532 17:47:47 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:13:20.532 17:47:47 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:13:20.532 17:47:47 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:13:20.532 17:47:47 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:13:20.532 17:47:47 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 
00:13:20.532 17:47:47 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:20.532 17:47:47 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:20.532 17:47:47 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:20.532 17:47:47 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:20.532 17:47:47 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:20.532 17:47:47 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:20.532 17:47:47 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:13:20.532 17:47:47 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:20.532 #define SPDK_CONFIG_H 00:13:20.532 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:20.532 #define SPDK_CONFIG_APPS 1 00:13:20.532 #define SPDK_CONFIG_ARCH native 00:13:20.532 #define SPDK_CONFIG_ASAN 1 00:13:20.532 #undef SPDK_CONFIG_AVAHI 00:13:20.532 #undef SPDK_CONFIG_CET 00:13:20.532 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:20.532 #define SPDK_CONFIG_COVERAGE 1 00:13:20.532 #define SPDK_CONFIG_CROSS_PREFIX 00:13:20.532 #undef SPDK_CONFIG_CRYPTO 00:13:20.532 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:20.532 #undef SPDK_CONFIG_CUSTOMOCF 00:13:20.532 #undef SPDK_CONFIG_DAOS 00:13:20.532 #define SPDK_CONFIG_DAOS_DIR 00:13:20.532 #define SPDK_CONFIG_DEBUG 1 00:13:20.532 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:20.532 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:13:20.532 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:20.532 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:20.532 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:20.532 #undef SPDK_CONFIG_DPDK_UADK 00:13:20.532 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:13:20.532 #define SPDK_CONFIG_EXAMPLES 1 00:13:20.532 #undef SPDK_CONFIG_FC 00:13:20.532 #define SPDK_CONFIG_FC_PATH 00:13:20.532 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:20.532 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:20.532 #define SPDK_CONFIG_FSDEV 1 00:13:20.532 #undef SPDK_CONFIG_FUSE 00:13:20.532 #undef SPDK_CONFIG_FUZZER 00:13:20.532 #define SPDK_CONFIG_FUZZER_LIB 00:13:20.532 #undef SPDK_CONFIG_GOLANG 00:13:20.532 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:20.532 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:20.532 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:20.532 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:20.532 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:20.532 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:20.532 #undef SPDK_CONFIG_HAVE_LZ4 00:13:20.532 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:20.532 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:20.532 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:20.532 #define SPDK_CONFIG_IDXD 1 00:13:20.532 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:20.532 #undef SPDK_CONFIG_IPSEC_MB 00:13:20.532 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:20.532 #define SPDK_CONFIG_ISAL 1 00:13:20.532 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:20.532 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:20.532 #define SPDK_CONFIG_LIBDIR 00:13:20.532 #undef SPDK_CONFIG_LTO 00:13:20.532 #define SPDK_CONFIG_MAX_LCORES 128 00:13:20.532 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:20.532 #define SPDK_CONFIG_NVME_CUSE 1 00:13:20.532 #undef SPDK_CONFIG_OCF 00:13:20.532 #define SPDK_CONFIG_OCF_PATH 00:13:20.532 #define SPDK_CONFIG_OPENSSL_PATH 00:13:20.532 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:20.532 
#define SPDK_CONFIG_PGO_DIR 00:13:20.532 #undef SPDK_CONFIG_PGO_USE 00:13:20.532 #define SPDK_CONFIG_PREFIX /usr/local 00:13:20.532 #undef SPDK_CONFIG_RAID5F 00:13:20.532 #undef SPDK_CONFIG_RBD 00:13:20.532 #define SPDK_CONFIG_RDMA 1 00:13:20.532 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:20.532 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:20.532 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:20.532 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:20.532 #define SPDK_CONFIG_SHARED 1 00:13:20.532 #undef SPDK_CONFIG_SMA 00:13:20.532 #define SPDK_CONFIG_TESTS 1 00:13:20.532 #undef SPDK_CONFIG_TSAN 00:13:20.532 #define SPDK_CONFIG_UBLK 1 00:13:20.532 #define SPDK_CONFIG_UBSAN 1 00:13:20.532 #undef SPDK_CONFIG_UNIT_TESTS 00:13:20.532 #undef SPDK_CONFIG_URING 00:13:20.532 #define SPDK_CONFIG_URING_PATH 00:13:20.532 #undef SPDK_CONFIG_URING_ZNS 00:13:20.532 #undef SPDK_CONFIG_USDT 00:13:20.532 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:20.532 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:20.532 #undef SPDK_CONFIG_VFIO_USER 00:13:20.532 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:20.532 #define SPDK_CONFIG_VHOST 1 00:13:20.532 #define SPDK_CONFIG_VIRTIO 1 00:13:20.532 #undef SPDK_CONFIG_VTUNE 00:13:20.533 #define SPDK_CONFIG_VTUNE_DIR 00:13:20.533 #define SPDK_CONFIG_WERROR 1 00:13:20.533 #define SPDK_CONFIG_WPDK_DIR 00:13:20.533 #define SPDK_CONFIG_XNVME 1 00:13:20.533 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:20.533 17:47:47 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:20.533 17:47:47 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.533 17:47:47 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.533 17:47:47 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.533 17:47:47 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.533 17:47:47 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.533 17:47:47 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.533 17:47:47 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.533 17:47:47 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:20.533 17:47:47 nvme_xnvme -- paths/export.sh@6 -- # 
echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@68 -- # uname -s 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:13:20.533 17:47:47 nvme_xnvme -- pm/common@88 -- # [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:13:20.533 17:47:47 
nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:13:20.533 17:47:47 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:20.534 17:47:47 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:13:20.534 17:47:47 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:20.534 17:47:47 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:13:20.534 17:47:47 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:20.534 17:47:47 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:13:20.534 17:47:47 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:20.534 17:47:47 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:13:20.534 17:47:47 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:20.534 17:47:47 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:13:20.534 17:47:47 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:20.534 17:47:47 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:13:20.534 17:47:47 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:20.534 17:47:47 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:13:20.534 17:47:47 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@142 -- 
# : true 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:20.796 17:47:47 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:13:20.797 
17:47:47 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:20.797 
17:47:47 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 
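Aside: the set_test_storage sequence traced just below probes whether the filesystem behind the test directory can hold the 2 GiB the suite requests, falling back to a mktemp-derived path otherwise. A simplified sketch of the same probe (GNU df assumed; the harness instead reads `df -T` output line by line into associative arrays):

  requested_size=$((2 * 1024 * 1024 * 1024))      # 2147483648, as in the trace
  target_dir=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
  mount=$(df "$target_dir" | awk 'NR == 2 {print $6}')
  avail=$(df --output=avail -B1 "$target_dir" | tail -n 1)
  if (( avail >= requested_size )); then
      printf '* Found test storage at %s (on %s)\n' "$target_dir" "$mount"
  else
      printf '* Falling back to %s\n' "$(mktemp -udt spdk.XXXXXX)"
  fi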
00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70000 ]] 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70000 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.rm39EH 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.rm39EH/tests/xnvme /tmp/spdk.rm39EH 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976215552 00:13:20.797 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592326144 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:13:20.798 
17:47:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261661696 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13976215552 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592326144 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266273792 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266425344 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=151552 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:13:20.798 17:47:47 
nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253269504 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253281792 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=94057893888 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5644886016 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:13:20.798 * Looking for test storage... 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13976215552 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:20.798 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:20.798 17:47:47 nvme_xnvme -- 
common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@1685 -- # true 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:20.798 17:47:47 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:20.798 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.798 --rc genhtml_branch_coverage=1 00:13:20.798 --rc genhtml_function_coverage=1 00:13:20.798 --rc genhtml_legend=1 00:13:20.798 --rc geninfo_all_blocks=1 00:13:20.798 --rc geninfo_unexecuted_blocks=1 00:13:20.798 00:13:20.798 ' 00:13:20.798 17:47:47 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:20.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.799 --rc genhtml_branch_coverage=1 00:13:20.799 --rc genhtml_function_coverage=1 00:13:20.799 --rc genhtml_legend=1 00:13:20.799 --rc geninfo_all_blocks=1 00:13:20.799 --rc geninfo_unexecuted_blocks=1 00:13:20.799 00:13:20.799 ' 00:13:20.799 17:47:47 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:20.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.799 --rc genhtml_branch_coverage=1 00:13:20.799 --rc genhtml_function_coverage=1 00:13:20.799 --rc genhtml_legend=1 00:13:20.799 --rc geninfo_all_blocks=1 00:13:20.799 --rc geninfo_unexecuted_blocks=1 00:13:20.799 00:13:20.799 ' 00:13:20.799 17:47:47 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:20.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:20.799 --rc genhtml_branch_coverage=1 00:13:20.799 --rc genhtml_function_coverage=1 00:13:20.799 --rc genhtml_legend=1 00:13:20.799 --rc geninfo_all_blocks=1 00:13:20.799 --rc geninfo_unexecuted_blocks=1 00:13:20.799 00:13:20.799 ' 00:13:20.799 17:47:47 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:20.799 17:47:47 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:20.799 17:47:47 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.799 17:47:47 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.799 17:47:47 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.799 17:47:47 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.799 17:47:47 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.799 17:47:47 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.799 17:47:47 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:20.799 17:47:47 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:13:20.799 
17:47:47 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:13:20.799 17:47:47 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:21.368 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:21.627 Waiting for block devices as requested 00:13:21.627 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:21.886 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:21.886 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:22.145 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:27.418 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:27.418 17:47:54 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:13:27.418 17:47:54 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:13:27.418 17:47:54 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:13:27.678 17:47:54 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:13:27.678 17:47:54 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:13:27.678 17:47:54 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:13:27.678 17:47:54 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:13:27.678 17:47:54 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:13:27.938 No valid GPT data, bailing 00:13:27.938 17:47:54 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:13:27.938 17:47:54 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:13:27.938 17:47:54 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:13:27.938 17:47:54 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:13:27.938 17:47:54 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:13:27.938 17:47:54 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:13:27.938 17:47:54 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:13:27.938 17:47:54 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:13:27.938 17:47:54 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:27.938 17:47:54 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:27.938 17:47:54 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:13:27.938 17:47:54 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:13:27.938 17:47:54 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:27.938 17:47:54 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:27.938 17:47:54 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:27.938 17:47:54 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:27.938 17:47:54 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:27.938 17:47:54 
nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:27.938 17:47:54 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.938 17:47:54 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.938 ************************************ 00:13:27.938 START TEST xnvme_rpc 00:13:27.938 ************************************ 00:13:27.938 17:47:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:27.938 17:47:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:27.938 17:47:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:27.938 17:47:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:27.938 17:47:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:27.938 17:47:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70402 00:13:27.938 17:47:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:27.938 17:47:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70402 00:13:27.938 17:47:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70402 ']' 00:13:27.938 17:47:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.938 17:47:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.938 17:47:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.938 17:47:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.938 17:47:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.938 [2024-11-20 17:47:54.996434] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
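Aside: the xnvme_rpc test starting here follows a start-target / create-bdev / read-back / tear-down pattern. A condensed sketch using SPDK's rpc.py in place of the harness's rpc_cmd wrapper — the jq filter is the one the trace applies below, and waitforlisten is reduced to a comment:

  # Start the target, create an xnvme bdev over the raw namespace, then read
  # the config back and pick out one parameter with jq, as the test asserts.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt & spdk_tgt=$!
  # ... poll /var/tmp/spdk.sock until the target is listening ...
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio
  "$rpc" framework_get_config bdev |
      jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename'
  # expected output: /dev/nvme0n1
  "$rpc" bdev_xnvme_delete xnvme_bdev
  kill "$spdk_tgt"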
00:13:27.938 [2024-11-20 17:47:54.996569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70402 ] 00:13:28.197 [2024-11-20 17:47:55.176127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.197 [2024-11-20 17:47:55.295571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.136 xnvme_bdev 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.136 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70402 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70402 ']' 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70402 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:29.395 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:29.396 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70402 00:13:29.396 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:29.396 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:29.396 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70402' 00:13:29.396 killing process with pid 70402 00:13:29.396 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70402 00:13:29.396 17:47:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70402 00:13:31.931 00:13:31.931 real 0m3.978s 00:13:31.931 user 0m4.038s 00:13:31.931 sys 0m0.521s 00:13:31.931 17:47:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.931 17:47:58 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:31.931 ************************************ 00:13:31.931 END TEST xnvme_rpc 00:13:31.931 ************************************ 00:13:31.931 17:47:58 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:31.931 17:47:58 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:31.931 17:47:58 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.931 17:47:58 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.931 ************************************ 00:13:31.931 START TEST xnvme_bdevperf 00:13:31.931 ************************************ 00:13:31.931 17:47:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:31.931 17:47:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:31.931 17:47:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:13:31.931 17:47:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:31.931 17:47:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:31.931 17:47:58 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:31.931 17:47:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:31.931 17:47:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:31.931 { 00:13:31.931 "subsystems": [ 00:13:31.931 { 00:13:31.931 "subsystem": "bdev", 00:13:31.931 "config": [ 00:13:31.931 { 00:13:31.931 "params": { 00:13:31.931 "io_mechanism": "libaio", 00:13:31.931 "conserve_cpu": false, 00:13:31.931 "filename": "/dev/nvme0n1", 00:13:31.931 "name": "xnvme_bdev" 00:13:31.931 }, 00:13:31.931 "method": "bdev_xnvme_create" 00:13:31.931 }, 00:13:31.931 { 00:13:31.931 "method": "bdev_wait_for_examine" 00:13:31.931 } 00:13:31.931 ] 00:13:31.931 } 00:13:31.931 ] 00:13:31.931 } 00:13:31.931 [2024-11-20 17:47:59.028811] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:13:31.931 [2024-11-20 17:47:59.028937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70482 ] 00:13:32.194 [2024-11-20 17:47:59.210338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.194 [2024-11-20 17:47:59.330689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.763 Running I/O for 5 seconds... 00:13:34.633 48055.00 IOPS, 187.71 MiB/s [2024-11-20T17:48:02.743Z] 46100.00 IOPS, 180.08 MiB/s [2024-11-20T17:48:04.120Z] 44500.67 IOPS, 173.83 MiB/s [2024-11-20T17:48:05.057Z] 44210.25 IOPS, 172.70 MiB/s [2024-11-20T17:48:05.057Z] 43771.20 IOPS, 170.98 MiB/s 00:13:37.881 Latency(us) 00:13:37.881 [2024-11-20T17:48:05.057Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:37.881 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:37.881 xnvme_bdev : 5.00 43751.23 170.90 0.00 0.00 1459.23 195.75 5632.41 00:13:37.881 [2024-11-20T17:48:05.057Z] =================================================================================================================== 00:13:37.881 [2024-11-20T17:48:05.057Z] Total : 43751.23 170.90 0.00 0.00 1459.23 195.75 5632.41 00:13:38.843 17:48:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:38.843 17:48:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:38.843 17:48:05 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:38.843 17:48:05 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:38.843 17:48:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:38.843 { 00:13:38.843 "subsystems": [ 00:13:38.843 { 00:13:38.843 "subsystem": "bdev", 00:13:38.843 "config": [ 00:13:38.843 { 00:13:38.843 "params": { 00:13:38.843 "io_mechanism": "libaio", 00:13:38.843 "conserve_cpu": false, 00:13:38.843 "filename": "/dev/nvme0n1", 00:13:38.843 "name": "xnvme_bdev" 00:13:38.843 }, 00:13:38.843 "method": "bdev_xnvme_create" 00:13:38.843 }, 00:13:38.843 { 00:13:38.843 "method": "bdev_wait_for_examine" 00:13:38.843 } 00:13:38.843 ] 00:13:38.843 } 00:13:38.843 ] 00:13:38.843 } 00:13:38.843 [2024-11-20 17:48:05.944431] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
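The JSON document above is what gen_conf prints: a bdev-subsystem config whose bdev_xnvme_create entry names the raw device, the xNVMe io_mechanism, and the conserve_cpu flag, followed by bdev_wait_for_examine so bdevperf holds off until the bdev registers. bdevperf never reads a file on disk here; the harness hands it the config over /dev/fd/62, i.e. bash process substitution. A minimal standalone sketch of the same call, assuming a stock SPDK build tree and a hypothetical bdev.json holding the config shown above (flags as in the trace: -q queue depth, -w workload, -t seconds, -T presumably the bdev under test, -o IO size in bytes):

    # sketch: feed the generated config to bdevperf via process substitution
    ./build/examples/bdevperf --json <(cat bdev.json) \
        -q 64 -w randread -t 5 -T xnvme_bdev -o 4096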
00:13:38.843 [2024-11-20 17:48:05.944545] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70558 ] 00:13:39.102 [2024-11-20 17:48:06.130561] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.102 [2024-11-20 17:48:06.251063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.671 Running I/O for 5 seconds... 00:13:41.543 43909.00 IOPS, 171.52 MiB/s [2024-11-20T17:48:09.663Z] 42906.00 IOPS, 167.60 MiB/s [2024-11-20T17:48:11.035Z] 42706.00 IOPS, 166.82 MiB/s [2024-11-20T17:48:11.975Z] 42737.75 IOPS, 166.94 MiB/s [2024-11-20T17:48:11.975Z] 42660.20 IOPS, 166.64 MiB/s 00:13:44.799 Latency(us) 00:13:44.799 [2024-11-20T17:48:11.975Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:44.799 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:44.799 xnvme_bdev : 5.00 42648.49 166.60 0.00 0.00 1497.13 177.66 5921.93 00:13:44.799 [2024-11-20T17:48:11.975Z] =================================================================================================================== 00:13:44.799 [2024-11-20T17:48:11.975Z] Total : 42648.49 166.60 0.00 0.00 1497.13 177.66 5921.93 00:13:45.731 00:13:45.731 real 0m13.833s 00:13:45.731 user 0m5.111s 00:13:45.731 sys 0m5.810s 00:13:45.731 17:48:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.731 17:48:12 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:45.731 ************************************ 00:13:45.731 END TEST xnvme_bdevperf 00:13:45.731 ************************************ 00:13:45.731 17:48:12 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:45.731 17:48:12 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:45.731 17:48:12 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.731 17:48:12 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:45.731 ************************************ 00:13:45.731 START TEST xnvme_fio_plugin 00:13:45.731 ************************************ 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:45.731 17:48:12 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:45.731 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:45.732 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:45.732 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:45.732 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:45.732 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:45.732 17:48:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:45.732 { 00:13:45.732 "subsystems": [ 00:13:45.732 { 00:13:45.732 "subsystem": "bdev", 00:13:45.732 "config": [ 00:13:45.732 { 00:13:45.732 "params": { 00:13:45.732 "io_mechanism": "libaio", 00:13:45.732 "conserve_cpu": false, 00:13:45.732 "filename": "/dev/nvme0n1", 00:13:45.732 "name": "xnvme_bdev" 00:13:45.732 }, 00:13:45.732 "method": "bdev_xnvme_create" 00:13:45.732 }, 00:13:45.732 { 00:13:45.732 "method": "bdev_wait_for_examine" 00:13:45.732 } 00:13:45.732 ] 00:13:45.732 } 00:13:45.732 ] 00:13:45.732 } 00:13:45.991 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:45.991 fio-3.35 00:13:45.991 Starting 1 thread 00:13:52.550 00:13:52.550 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70688: Wed Nov 20 17:48:18 2024 00:13:52.550 read: IOPS=46.5k, BW=181MiB/s (190MB/s)(907MiB/5001msec) 00:13:52.550 slat (usec): min=4, max=869, avg=18.78, stdev=21.87 00:13:52.550 clat (usec): min=85, max=6910, avg=812.79, stdev=527.96 00:13:52.550 lat (usec): min=126, max=6991, avg=831.57, stdev=532.45 00:13:52.550 clat percentiles (usec): 00:13:52.550 | 1.00th=[ 172], 5.00th=[ 243], 10.00th=[ 306], 20.00th=[ 420], 00:13:52.550 | 30.00th=[ 529], 40.00th=[ 627], 50.00th=[ 734], 60.00th=[ 840], 00:13:52.550 | 70.00th=[ 947], 80.00th=[ 1074], 90.00th=[ 1287], 95.00th=[ 1663], 00:13:52.550 | 99.00th=[ 3032], 99.50th=[ 3621], 99.90th=[ 4490], 99.95th=[ 4752], 00:13:52.550 | 99.99th=[ 5211] 00:13:52.550 bw ( KiB/s): 
min=159600, max=200872, per=100.00%, avg=187304.67, stdev=12336.53, samples=9 00:13:52.550 iops : min=39900, max=50218, avg=46826.11, stdev=3084.17, samples=9 00:13:52.550 lat (usec) : 100=0.03%, 250=5.51%, 500=21.95%, 750=24.21%, 1000=23.11% 00:13:52.550 lat (msec) : 2=21.74%, 4=3.15%, 10=0.30% 00:13:52.550 cpu : usr=25.98%, sys=51.80%, ctx=205, majf=0, minf=764 00:13:52.550 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=11.0%, 16=26.3%, 32=55.9%, >=64=1.8% 00:13:52.550 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.550 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:13:52.550 issued rwts: total=232310,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.550 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:52.550 00:13:52.550 Run status group 0 (all jobs): 00:13:52.550 READ: bw=181MiB/s (190MB/s), 181MiB/s-181MiB/s (190MB/s-190MB/s), io=907MiB (952MB), run=5001-5001msec 00:13:53.491 ----------------------------------------------------- 00:13:53.491 Suppressions used: 00:13:53.491 count bytes template 00:13:53.491 1 11 /usr/src/fio/parse.c 00:13:53.491 1 8 libtcmalloc_minimal.so 00:13:53.491 1 904 libcrypto.so 00:13:53.491 ----------------------------------------------------- 00:13:53.491 00:13:53.491 17:48:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # 
asan_lib=/usr/lib64/libasan.so.8 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:53.492 17:48:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:53.492 { 00:13:53.492 "subsystems": [ 00:13:53.492 { 00:13:53.492 "subsystem": "bdev", 00:13:53.492 "config": [ 00:13:53.492 { 00:13:53.492 "params": { 00:13:53.492 "io_mechanism": "libaio", 00:13:53.492 "conserve_cpu": false, 00:13:53.492 "filename": "/dev/nvme0n1", 00:13:53.492 "name": "xnvme_bdev" 00:13:53.492 }, 00:13:53.492 "method": "bdev_xnvme_create" 00:13:53.492 }, 00:13:53.492 { 00:13:53.492 "method": "bdev_wait_for_examine" 00:13:53.492 } 00:13:53.492 ] 00:13:53.492 } 00:13:53.492 ] 00:13:53.492 } 00:13:53.492 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:53.492 fio-3.35 00:13:53.492 Starting 1 thread 00:14:00.060 00:14:00.060 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=70781: Wed Nov 20 17:48:26 2024 00:14:00.060 write: IOPS=48.0k, BW=187MiB/s (197MB/s)(937MiB/5001msec); 0 zone resets 00:14:00.060 slat (usec): min=4, max=496, avg=18.17, stdev=23.10 00:14:00.060 clat (usec): min=85, max=5494, avg=793.06, stdev=499.80 00:14:00.060 lat (usec): min=95, max=5589, avg=811.23, stdev=503.84 00:14:00.060 clat percentiles (usec): 00:14:00.060 | 1.00th=[ 178], 5.00th=[ 251], 10.00th=[ 314], 20.00th=[ 424], 00:14:00.060 | 30.00th=[ 523], 40.00th=[ 619], 50.00th=[ 717], 60.00th=[ 816], 00:14:00.060 | 70.00th=[ 922], 80.00th=[ 1037], 90.00th=[ 1237], 95.00th=[ 1549], 00:14:00.060 | 99.00th=[ 2966], 99.50th=[ 3556], 99.90th=[ 4359], 99.95th=[ 4621], 00:14:00.060 | 99.99th=[ 4948] 00:14:00.060 bw ( KiB/s): min=179152, max=207584, per=100.00%, avg=193230.22, stdev=10140.66, samples=9 00:14:00.060 iops : min=44788, max=51896, avg=48307.56, stdev=2535.17, samples=9 00:14:00.060 lat (usec) : 100=0.03%, 250=4.88%, 500=22.69%, 750=25.69%, 1000=23.66% 00:14:00.060 lat (msec) : 2=20.22%, 4=2.57%, 10=0.26% 00:14:00.060 cpu : usr=25.94%, sys=52.92%, ctx=83, majf=0, minf=765 00:14:00.060 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=10.5%, 16=26.2%, 32=56.8%, >=64=1.8% 00:14:00.060 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:00.060 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:14:00.060 issued rwts: total=0,239949,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:00.060 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:00.060 00:14:00.060 Run status group 0 (all jobs): 00:14:00.060 WRITE: bw=187MiB/s (197MB/s), 187MiB/s-187MiB/s (197MB/s-197MB/s), io=937MiB (983MB), run=5001-5001msec 00:14:00.627 ----------------------------------------------------- 00:14:00.627 Suppressions used: 00:14:00.627 count bytes template 00:14:00.627 1 11 /usr/src/fio/parse.c 00:14:00.627 1 8 libtcmalloc_minimal.so 00:14:00.627 1 904 libcrypto.so 00:14:00.627 ----------------------------------------------------- 00:14:00.627 00:14:00.627 00:14:00.627 real 0m14.910s 00:14:00.627 user 
0m6.422s 00:14:00.627 sys 0m5.979s 00:14:00.627 17:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.627 ************************************ 00:14:00.627 END TEST xnvme_fio_plugin 00:14:00.627 ************************************ 00:14:00.627 17:48:27 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:00.627 17:48:27 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:00.627 17:48:27 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:00.627 17:48:27 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:00.627 17:48:27 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:00.627 17:48:27 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:00.627 17:48:27 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.627 17:48:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.886 ************************************ 00:14:00.886 START TEST xnvme_rpc 00:14:00.886 ************************************ 00:14:00.886 17:48:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:00.886 17:48:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:00.886 17:48:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:00.886 17:48:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:00.886 17:48:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:00.886 17:48:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70875 00:14:00.886 17:48:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:00.886 17:48:27 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70875 00:14:00.886 17:48:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70875 ']' 00:14:00.886 17:48:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.886 17:48:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.886 17:48:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.886 17:48:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.886 17:48:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.886 [2024-11-20 17:48:27.918465] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
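Here the harness flips conserve_cpu to true and re-runs the whole libaio sequence. Every attribute check in xnvme_rpc uses the same two-step pattern visible in the trace: dump the live bdev configuration from the target, then pluck a single bdev_xnvme_create parameter with jq and assert it against the expected value:

    # verbatim pattern from the trace (rpc_xnvme in xnvme/common.sh)
    rpc_cmd framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # the caller then asserts the output, e.g. [[ true == \t\r\u\e ]]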
00:14:00.886 [2024-11-20 17:48:27.918596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70875 ] 00:14:01.145 [2024-11-20 17:48:28.103826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.145 [2024-11-20 17:48:28.215225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.080 xnvme_bdev 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.080 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:02.338 17:48:29 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70875 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70875 ']' 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70875 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70875 00:14:02.338 killing process with pid 70875 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70875' 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70875 00:14:02.338 17:48:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70875 00:14:04.876 00:14:04.876 real 0m3.947s 00:14:04.876 user 0m3.985s 00:14:04.876 sys 0m0.550s 00:14:04.876 17:48:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:04.876 ************************************ 00:14:04.876 END TEST xnvme_rpc 00:14:04.876 ************************************ 00:14:04.876 17:48:31 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:04.876 17:48:31 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:04.876 17:48:31 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:04.876 17:48:31 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.876 17:48:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:04.876 ************************************ 00:14:04.876 START TEST xnvme_bdevperf 00:14:04.876 ************************************ 00:14:04.876 17:48:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:04.876 17:48:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:04.876 17:48:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:14:04.876 17:48:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:04.876 17:48:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:04.876 17:48:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:04.876 17:48:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:04.876 17:48:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:04.876 { 00:14:04.876 "subsystems": [ 00:14:04.876 { 00:14:04.876 "subsystem": "bdev", 00:14:04.876 "config": [ 00:14:04.876 { 00:14:04.876 "params": { 00:14:04.876 "io_mechanism": "libaio", 00:14:04.876 "conserve_cpu": true, 00:14:04.876 "filename": "/dev/nvme0n1", 00:14:04.876 "name": "xnvme_bdev" 00:14:04.876 }, 00:14:04.876 "method": "bdev_xnvme_create" 00:14:04.876 }, 00:14:04.876 { 00:14:04.876 "method": "bdev_wait_for_examine" 00:14:04.876 } 00:14:04.876 ] 00:14:04.876 } 00:14:04.876 ] 00:14:04.876 } 00:14:04.876 [2024-11-20 17:48:31.925433] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:14:04.876 [2024-11-20 17:48:31.925541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70959 ] 00:14:05.135 [2024-11-20 17:48:32.107146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.135 [2024-11-20 17:48:32.217304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.701 Running I/O for 5 seconds... 00:14:07.570 43311.00 IOPS, 169.18 MiB/s [2024-11-20T17:48:35.683Z] 42678.50 IOPS, 166.71 MiB/s [2024-11-20T17:48:36.618Z] 42632.33 IOPS, 166.53 MiB/s [2024-11-20T17:48:37.995Z] 42580.75 IOPS, 166.33 MiB/s 00:14:10.820 Latency(us) 00:14:10.820 [2024-11-20T17:48:37.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.820 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:10.820 xnvme_bdev : 5.00 42532.97 166.14 0.00 0.00 1501.33 154.63 7685.35 00:14:10.820 [2024-11-20T17:48:37.996Z] =================================================================================================================== 00:14:10.820 [2024-11-20T17:48:37.996Z] Total : 42532.97 166.14 0.00 0.00 1501.33 154.63 7685.35 00:14:11.753 17:48:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:11.753 17:48:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:11.753 17:48:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:11.753 17:48:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:11.753 17:48:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:11.753 { 00:14:11.753 "subsystems": [ 00:14:11.753 { 00:14:11.753 "subsystem": "bdev", 00:14:11.753 "config": [ 00:14:11.753 { 00:14:11.753 "params": { 00:14:11.753 "io_mechanism": "libaio", 00:14:11.753 "conserve_cpu": true, 00:14:11.753 "filename": "/dev/nvme0n1", 00:14:11.753 "name": "xnvme_bdev" 00:14:11.753 }, 00:14:11.753 "method": "bdev_xnvme_create" 00:14:11.753 }, 00:14:11.753 { 00:14:11.753 "method": "bdev_wait_for_examine" 00:14:11.753 } 00:14:11.753 ] 00:14:11.753 } 00:14:11.753 ] 00:14:11.753 } 00:14:11.753 [2024-11-20 17:48:38.843719] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
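The generated config is identical to the first libaio round except for "conserve_cpu": true. Outside the harness, the same bdev could presumably be created against a running spdk_tgt with the rpc.py wrapper that rpc_cmd forwards to; the positional arguments and the -c flag below mirror the traced call exactly, though the script path is the stock SPDK layout rather than anything this log confirms:

    # sketch of the equivalent direct RPC calls against a running target
    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # -c == conserve_cpu
    ./scripts/rpc.py bdev_xnvme_delete xnvme_bdev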
00:14:11.753 [2024-11-20 17:48:38.843846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71035 ] 00:14:12.011 [2024-11-20 17:48:39.026616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.011 [2024-11-20 17:48:39.144467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.575 Running I/O for 5 seconds... 00:14:14.441 43848.00 IOPS, 171.28 MiB/s [2024-11-20T17:48:42.549Z] 43331.50 IOPS, 169.26 MiB/s [2024-11-20T17:48:43.923Z] 42606.33 IOPS, 166.43 MiB/s [2024-11-20T17:48:44.857Z] 42254.00 IOPS, 165.05 MiB/s [2024-11-20T17:48:44.857Z] 40855.00 IOPS, 159.59 MiB/s 00:14:17.681 Latency(us) 00:14:17.681 [2024-11-20T17:48:44.857Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.681 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:17.681 xnvme_bdev : 5.01 40823.18 159.47 0.00 0.00 1564.08 156.27 3763.71 00:14:17.681 [2024-11-20T17:48:44.857Z] =================================================================================================================== 00:14:17.681 [2024-11-20T17:48:44.857Z] Total : 40823.18 159.47 0.00 0.00 1564.08 156.27 3763.71 00:14:18.615 00:14:18.615 real 0m13.856s 00:14:18.615 user 0m4.962s 00:14:18.615 sys 0m5.843s 00:14:18.615 17:48:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:18.615 17:48:45 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:18.615 ************************************ 00:14:18.615 END TEST xnvme_bdevperf 00:14:18.615 ************************************ 00:14:18.615 17:48:45 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:18.615 17:48:45 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:18.615 17:48:45 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:18.615 17:48:45 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:18.615 ************************************ 00:14:18.615 START TEST xnvme_fio_plugin 00:14:18.615 ************************************ 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:18.615 
17:48:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:18.615 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:18.874 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:18.874 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:18.874 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:18.874 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:18.874 17:48:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:18.874 { 00:14:18.874 "subsystems": [ 00:14:18.874 { 00:14:18.874 "subsystem": "bdev", 00:14:18.874 "config": [ 00:14:18.874 { 00:14:18.874 "params": { 00:14:18.874 "io_mechanism": "libaio", 00:14:18.874 "conserve_cpu": true, 00:14:18.874 "filename": "/dev/nvme0n1", 00:14:18.874 "name": "xnvme_bdev" 00:14:18.874 }, 00:14:18.874 "method": "bdev_xnvme_create" 00:14:18.874 }, 00:14:18.874 { 00:14:18.874 "method": "bdev_wait_for_examine" 00:14:18.874 } 00:14:18.874 ] 00:14:18.874 } 00:14:18.874 ] 00:14:18.874 } 00:14:18.874 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:18.874 fio-3.35 00:14:18.874 Starting 1 thread 00:14:25.439 00:14:25.439 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71160: Wed Nov 20 17:48:51 2024 00:14:25.439 read: IOPS=45.6k, BW=178MiB/s (187MB/s)(890MiB/5001msec) 00:14:25.439 slat (usec): min=4, max=367, avg=19.34, stdev=22.27 00:14:25.439 clat (usec): min=85, max=6403, avg=831.99, stdev=545.29 00:14:25.439 lat (usec): min=135, max=6491, avg=851.33, stdev=549.96 00:14:25.439 clat percentiles (usec): 00:14:25.439 | 1.00th=[ 172], 5.00th=[ 253], 10.00th=[ 326], 20.00th=[ 441], 00:14:25.439 | 30.00th=[ 537], 40.00th=[ 635], 50.00th=[ 725], 60.00th=[ 832], 00:14:25.439 | 70.00th=[ 947], 80.00th=[ 1090], 90.00th=[ 1369], 95.00th=[ 1778], 00:14:25.439 | 99.00th=[ 3130], 99.50th=[ 3785], 99.90th=[ 4686], 99.95th=[ 5014], 00:14:25.439 | 99.99th=[ 5473] 00:14:25.439 bw ( KiB/s): min=144520, max=217696, 
per=99.23%, avg=180895.11, stdev=22652.99, samples=9 00:14:25.439 iops : min=36130, max=54424, avg=45223.78, stdev=5663.25, samples=9 00:14:25.439 lat (usec) : 100=0.04%, 250=4.77%, 500=21.23%, 750=26.11%, 1000=21.91% 00:14:25.439 lat (msec) : 2=22.35%, 4=3.22%, 10=0.37% 00:14:25.439 cpu : usr=24.06%, sys=53.38%, ctx=96, majf=0, minf=764 00:14:25.439 IO depths : 1=0.1%, 2=1.1%, 4=3.8%, 8=10.1%, 16=25.6%, 32=57.5%, >=64=1.8% 00:14:25.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:25.439 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:14:25.439 issued rwts: total=227926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:25.439 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:25.439 00:14:25.439 Run status group 0 (all jobs): 00:14:25.439 READ: bw=178MiB/s (187MB/s), 178MiB/s-178MiB/s (187MB/s-187MB/s), io=890MiB (934MB), run=5001-5001msec 00:14:26.005 ----------------------------------------------------- 00:14:26.005 Suppressions used: 00:14:26.005 count bytes template 00:14:26.005 1 11 /usr/src/fio/parse.c 00:14:26.005 1 8 libtcmalloc_minimal.so 00:14:26.005 1 904 libcrypto.so 00:14:26.005 ----------------------------------------------------- 00:14:26.005 00:14:26.005 17:48:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:26.005 17:48:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:26.005 17:48:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:26.005 17:48:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:26.005 17:48:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:26.005 17:48:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:26.005 17:48:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:26.005 17:48:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:26.005 17:48:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:26.005 17:48:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:26.005 17:48:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:26.005 17:48:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:26.005 17:48:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:26.005 17:48:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:26.005 17:48:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:26.005 17:48:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:26.265 17:48:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:26.265 17:48:53 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:26.265 { 00:14:26.265 "subsystems": [ 00:14:26.265 { 00:14:26.265 "subsystem": "bdev", 00:14:26.265 "config": [ 00:14:26.265 { 00:14:26.265 "params": { 00:14:26.265 "io_mechanism": "libaio", 00:14:26.265 "conserve_cpu": true, 00:14:26.265 "filename": "/dev/nvme0n1", 00:14:26.265 "name": "xnvme_bdev" 00:14:26.265 }, 00:14:26.265 "method": "bdev_xnvme_create" 00:14:26.265 }, 00:14:26.265 { 00:14:26.265 "method": "bdev_wait_for_examine" 00:14:26.265 } 00:14:26.265 ] 00:14:26.265 } 00:14:26.265 ] 00:14:26.265 } 00:14:26.265 17:48:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:26.265 17:48:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:26.265 17:48:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:26.265 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:26.265 fio-3.35 00:14:26.265 Starting 1 thread 00:14:32.832 00:14:32.832 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71256: Wed Nov 20 17:48:59 2024 00:14:32.832 write: IOPS=47.5k, BW=185MiB/s (194MB/s)(928MiB/5001msec); 0 zone resets 00:14:32.832 slat (usec): min=4, max=449, avg=18.44, stdev=22.26 00:14:32.832 clat (usec): min=58, max=5749, avg=804.84, stdev=515.65 00:14:32.832 lat (usec): min=133, max=5844, avg=823.28, stdev=519.87 00:14:32.832 clat percentiles (usec): 00:14:32.832 | 1.00th=[ 176], 5.00th=[ 251], 10.00th=[ 318], 20.00th=[ 433], 00:14:32.832 | 30.00th=[ 529], 40.00th=[ 627], 50.00th=[ 725], 60.00th=[ 824], 00:14:32.832 | 70.00th=[ 930], 80.00th=[ 1057], 90.00th=[ 1254], 95.00th=[ 1598], 00:14:32.832 | 99.00th=[ 3032], 99.50th=[ 3654], 99.90th=[ 4490], 99.95th=[ 4686], 00:14:32.832 | 99.99th=[ 5014] 00:14:32.832 bw ( KiB/s): min=174336, max=205280, per=100.00%, avg=190539.11, stdev=10947.46, samples=9 00:14:32.832 iops : min=43584, max=51320, avg=47634.78, stdev=2736.86, samples=9 00:14:32.833 lat (usec) : 100=0.03%, 250=4.88%, 500=21.92%, 750=25.81%, 1000=23.38% 00:14:32.833 lat (msec) : 2=20.76%, 4=2.91%, 10=0.30% 00:14:32.833 cpu : usr=26.42%, sys=52.10%, ctx=80, majf=0, minf=765 00:14:32.833 IO depths : 1=0.1%, 2=1.0%, 4=3.7%, 8=10.4%, 16=26.0%, 32=57.0%, >=64=1.8% 00:14:32.833 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:32.833 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:14:32.833 issued rwts: total=0,237467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:32.833 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:32.833 00:14:32.833 Run status group 0 (all jobs): 00:14:32.833 WRITE: bw=185MiB/s (194MB/s), 185MiB/s-185MiB/s (194MB/s-194MB/s), io=928MiB (973MB), run=5001-5001msec 00:14:33.399 ----------------------------------------------------- 00:14:33.399 Suppressions used: 00:14:33.399 count bytes template 00:14:33.399 1 11 /usr/src/fio/parse.c 00:14:33.399 1 8 libtcmalloc_minimal.so 00:14:33.399 1 904 libcrypto.so 00:14:33.399 ----------------------------------------------------- 00:14:33.399 00:14:33.658 00:14:33.658 real 0m14.824s 00:14:33.658 user 0m6.250s 00:14:33.658 sys 0m6.044s 00:14:33.658 
************************************ 00:14:33.658 END TEST xnvme_fio_plugin 00:14:33.658 ************************************ 00:14:33.658 17:49:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.658 17:49:00 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:33.658 17:49:00 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:14:33.658 17:49:00 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:33.658 17:49:00 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:14:33.658 17:49:00 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:14:33.658 17:49:00 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:14:33.658 17:49:00 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:33.658 17:49:00 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:14:33.658 17:49:00 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:14:33.658 17:49:00 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:33.658 17:49:00 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:33.658 17:49:00 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.658 17:49:00 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:33.658 ************************************ 00:14:33.658 START TEST xnvme_rpc 00:14:33.658 ************************************ 00:14:33.658 17:49:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:33.658 17:49:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:33.658 17:49:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:33.658 17:49:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:33.658 17:49:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:33.658 17:49:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71338 00:14:33.658 17:49:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71338 00:14:33.658 17:49:00 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:33.658 17:49:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71338 ']' 00:14:33.658 17:49:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.658 17:49:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:33.658 17:49:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.658 17:49:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:33.658 17:49:00 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:33.658 [2024-11-20 17:49:00.779813] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
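The xnvme.sh line numbers in the trace (@75 through @86) outline the driver: an outer loop over io mechanisms — libaio done, io_uring starting here — with an inner loop over the conserve_cpu values, re-running the same three sub-tests for each combination. Roughly, as reconstructed from the traced lines rather than the verbatim script:

    for io in "${xnvme_io[@]}"; do                      # libaio, io_uring, ...
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1
        for cc in "${xnvme_conserve_cpu[@]}"; do        # false, then true
            method_bdev_xnvme_create_0["conserve_cpu"]=$cc
            run_test xnvme_rpc xnvme_rpc
            run_test xnvme_bdevperf xnvme_bdevperf
            run_test xnvme_fio_plugin xnvme_fio_plugin
        done
    done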
00:14:33.658 [2024-11-20 17:49:00.779984] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71338 ] 00:14:33.916 [2024-11-20 17:49:00.977317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.175 [2024-11-20 17:49:01.092224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.111 17:49:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:35.111 17:49:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:35.111 17:49:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:14:35.111 17:49:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.111 17:49:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.111 xnvme_bdev 00:14:35.111 17:49:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.111 17:49:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:35.111 17:49:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:35.111 17:49:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:35.111 17:49:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.111 17:49:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.111 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.111 17:49:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:35.111 17:49:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:35.111 17:49:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:35.111 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.111 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.111 17:49:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:35.111 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.111 17:49:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:14:35.111 17:49:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:35.111 17:49:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:35.111 17:49:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:35.111 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.111 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.111 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:35.112 17:49:02 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71338 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71338 ']' 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71338 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71338 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:35.112 killing process with pid 71338 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71338' 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71338 00:14:35.112 17:49:02 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71338 00:14:37.699 00:14:37.699 real 0m3.938s 00:14:37.699 user 0m3.992s 00:14:37.700 sys 0m0.541s 00:14:37.700 ************************************ 00:14:37.700 END TEST xnvme_rpc 00:14:37.700 ************************************ 00:14:37.700 17:49:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:37.700 17:49:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:37.700 17:49:04 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:37.700 17:49:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:37.700 17:49:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:37.700 17:49:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:37.700 ************************************ 00:14:37.700 START TEST xnvme_bdevperf 00:14:37.700 ************************************ 00:14:37.700 17:49:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:37.700 17:49:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:37.700 17:49:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:14:37.700 17:49:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:37.700 17:49:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:37.700 17:49:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:14:37.700 17:49:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:37.700 17:49:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:37.700 { 00:14:37.700 "subsystems": [ 00:14:37.700 { 00:14:37.700 "subsystem": "bdev", 00:14:37.700 "config": [ 00:14:37.700 { 00:14:37.700 "params": { 00:14:37.700 "io_mechanism": "io_uring", 00:14:37.700 "conserve_cpu": false, 00:14:37.700 "filename": "/dev/nvme0n1", 00:14:37.700 "name": "xnvme_bdev" 00:14:37.700 }, 00:14:37.700 "method": "bdev_xnvme_create" 00:14:37.700 }, 00:14:37.700 { 00:14:37.700 "method": "bdev_wait_for_examine" 00:14:37.700 } 00:14:37.700 ] 00:14:37.700 } 00:14:37.700 ] 00:14:37.700 } 00:14:37.700 [2024-11-20 17:49:04.757074] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:14:37.700 [2024-11-20 17:49:04.757240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71423 ] 00:14:37.959 [2024-11-20 17:49:04.935994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.959 [2024-11-20 17:49:05.049599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.528 Running I/O for 5 seconds... 00:14:40.398 40541.00 IOPS, 158.36 MiB/s [2024-11-20T17:49:08.512Z] 37584.50 IOPS, 146.81 MiB/s [2024-11-20T17:49:09.532Z] 40326.00 IOPS, 157.52 MiB/s [2024-11-20T17:49:10.468Z] 42738.00 IOPS, 166.95 MiB/s 00:14:43.292 Latency(us) 00:14:43.292 [2024-11-20T17:49:10.468Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:43.292 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:43.292 xnvme_bdev : 5.00 40213.83 157.09 0.00 0.00 1587.16 324.06 9633.00 00:14:43.292 [2024-11-20T17:49:10.468Z] =================================================================================================================== 00:14:43.292 [2024-11-20T17:49:10.468Z] Total : 40213.83 157.09 0.00 0.00 1587.16 324.06 9633.00 00:14:44.669 17:49:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:44.669 17:49:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:44.669 17:49:11 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:44.669 17:49:11 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:44.670 17:49:11 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:44.670 { 00:14:44.670 "subsystems": [ 00:14:44.670 { 00:14:44.670 "subsystem": "bdev", 00:14:44.670 "config": [ 00:14:44.670 { 00:14:44.670 "params": { 00:14:44.670 "io_mechanism": "io_uring", 00:14:44.670 "conserve_cpu": false, 00:14:44.670 "filename": "/dev/nvme0n1", 00:14:44.670 "name": "xnvme_bdev" 00:14:44.670 }, 00:14:44.670 "method": "bdev_xnvme_create" 00:14:44.670 }, 00:14:44.670 { 00:14:44.670 "method": "bdev_wait_for_examine" 00:14:44.670 } 00:14:44.670 ] 00:14:44.670 } 00:14:44.670 ] 00:14:44.670 } 00:14:44.670 [2024-11-20 17:49:11.638593] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
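The io_uring rounds reuse everything: the config above differs from the libaio one only in its io_mechanism, and after the two bdevperf passes the xnvme_fio_plugin stage further on drives the same JSON through fio's external spdk_bdev engine. That invocation, verbatim from this trace, preloads ASan's runtime ahead of the plugin so the sanitizer-instrumented engine resolves its interceptors:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev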
00:14:44.670 [2024-11-20 17:49:11.638739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71504 ] 00:14:44.670 [2024-11-20 17:49:11.826394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.929 [2024-11-20 17:49:11.947658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.187 Running I/O for 5 seconds... 00:14:47.141 25024.00 IOPS, 97.75 MiB/s [2024-11-20T17:49:15.694Z] 26506.50 IOPS, 103.54 MiB/s [2024-11-20T17:49:16.630Z] 28019.00 IOPS, 109.45 MiB/s [2024-11-20T17:49:17.568Z] 27999.75 IOPS, 109.37 MiB/s 00:14:50.392 Latency(us) 00:14:50.393 [2024-11-20T17:49:17.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.393 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:50.393 xnvme_bdev : 5.00 28552.88 111.53 0.00 0.00 2234.64 365.19 6606.24 00:14:50.393 [2024-11-20T17:49:17.569Z] =================================================================================================================== 00:14:50.393 [2024-11-20T17:49:17.569Z] Total : 28552.88 111.53 0.00 0.00 2234.64 365.19 6606.24 00:14:51.329 00:14:51.329 real 0m13.788s 00:14:51.329 user 0m6.561s 00:14:51.329 sys 0m7.022s 00:14:51.329 17:49:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:51.329 17:49:18 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:51.329 ************************************ 00:14:51.329 END TEST xnvme_bdevperf 00:14:51.329 ************************************ 00:14:51.590 17:49:18 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:51.590 17:49:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:51.590 17:49:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:51.590 17:49:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:51.590 ************************************ 00:14:51.590 START TEST xnvme_fio_plugin 00:14:51.590 ************************************ 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:51.590 17:49:18 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:51.590 17:49:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:51.590 { 00:14:51.590 "subsystems": [ 00:14:51.590 { 00:14:51.590 "subsystem": "bdev", 00:14:51.590 "config": [ 00:14:51.590 { 00:14:51.590 "params": { 00:14:51.590 "io_mechanism": "io_uring", 00:14:51.590 "conserve_cpu": false, 00:14:51.590 "filename": "/dev/nvme0n1", 00:14:51.590 "name": "xnvme_bdev" 00:14:51.590 }, 00:14:51.590 "method": "bdev_xnvme_create" 00:14:51.590 }, 00:14:51.590 { 00:14:51.590 "method": "bdev_wait_for_examine" 00:14:51.590 } 00:14:51.590 ] 00:14:51.590 } 00:14:51.590 ] 00:14:51.590 } 00:14:51.849 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:51.849 fio-3.35 00:14:51.849 Starting 1 thread 00:14:58.428 00:14:58.428 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71630: Wed Nov 20 17:49:24 2024 00:14:58.428 read: IOPS=26.3k, BW=103MiB/s (108MB/s)(515MiB/5001msec) 00:14:58.428 slat (usec): min=2, max=179, avg= 6.97, stdev= 2.91 00:14:58.428 clat (usec): min=287, max=3790, avg=2151.12, stdev=358.08 00:14:58.428 lat (usec): min=291, max=3797, avg=2158.10, stdev=359.39 00:14:58.428 clat percentiles (usec): 00:14:58.428 | 1.00th=[ 1369], 5.00th=[ 1565], 10.00th=[ 1680], 20.00th=[ 1827], 00:14:58.428 | 30.00th=[ 1942], 40.00th=[ 2057], 50.00th=[ 2147], 60.00th=[ 2245], 00:14:58.428 | 70.00th=[ 2343], 80.00th=[ 2474], 90.00th=[ 2606], 95.00th=[ 2737], 00:14:58.428 | 99.00th=[ 2900], 99.50th=[ 2999], 99.90th=[ 3294], 99.95th=[ 3458], 00:14:58.428 | 99.99th=[ 3720] 00:14:58.428 bw ( KiB/s): min=93184, max=123904, per=99.19%, avg=104504.89, 
stdev=12133.55, samples=9 00:14:58.428 iops : min=23296, max=30976, avg=26126.22, stdev=3033.39, samples=9 00:14:58.428 lat (usec) : 500=0.01% 00:14:58.428 lat (msec) : 2=34.88%, 4=65.11% 00:14:58.429 cpu : usr=36.46%, sys=62.30%, ctx=18, majf=0, minf=762 00:14:58.429 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=24.9%, 32=50.1%, >=64=1.6% 00:14:58.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:58.429 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:14:58.429 issued rwts: total=131721,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:58.429 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:58.429 00:14:58.429 Run status group 0 (all jobs): 00:14:58.429 READ: bw=103MiB/s (108MB/s), 103MiB/s-103MiB/s (108MB/s-108MB/s), io=515MiB (540MB), run=5001-5001msec 00:14:59.025 ----------------------------------------------------- 00:14:59.025 Suppressions used: 00:14:59.025 count bytes template 00:14:59.025 1 11 /usr/src/fio/parse.c 00:14:59.025 1 8 libtcmalloc_minimal.so 00:14:59.025 1 904 libcrypto.so 00:14:59.025 ----------------------------------------------------- 00:14:59.025 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:59.025 17:49:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:59.025 { 00:14:59.025 "subsystems": [ 00:14:59.025 { 00:14:59.025 "subsystem": "bdev", 00:14:59.025 "config": [ 00:14:59.025 { 00:14:59.025 "params": { 00:14:59.025 "io_mechanism": "io_uring", 00:14:59.025 "conserve_cpu": false, 00:14:59.025 "filename": "/dev/nvme0n1", 00:14:59.025 "name": "xnvme_bdev" 00:14:59.025 }, 00:14:59.025 "method": "bdev_xnvme_create" 00:14:59.025 }, 00:14:59.025 { 00:14:59.025 "method": "bdev_wait_for_examine" 00:14:59.025 } 00:14:59.025 ] 00:14:59.025 } 00:14:59.025 ] 00:14:59.025 } 00:14:59.025 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:59.025 fio-3.35 00:14:59.025 Starting 1 thread 00:15:05.593 00:15:05.593 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71727: Wed Nov 20 17:49:31 2024 00:15:05.593 write: IOPS=30.9k, BW=121MiB/s (127MB/s)(605MiB/5002msec); 0 zone resets 00:15:05.593 slat (usec): min=2, max=135, avg= 5.67, stdev= 2.18 00:15:05.593 clat (usec): min=395, max=20855, avg=1844.15, stdev=474.47 00:15:05.593 lat (usec): min=398, max=20857, avg=1849.82, stdev=475.01 00:15:05.593 clat percentiles (usec): 00:15:05.593 | 1.00th=[ 1237], 5.00th=[ 1385], 10.00th=[ 1467], 20.00th=[ 1582], 00:15:05.593 | 30.00th=[ 1663], 40.00th=[ 1729], 50.00th=[ 1811], 60.00th=[ 1876], 00:15:05.593 | 70.00th=[ 1958], 80.00th=[ 2057], 90.00th=[ 2245], 95.00th=[ 2409], 00:15:05.593 | 99.00th=[ 2802], 99.50th=[ 2999], 99.90th=[ 3752], 99.95th=[ 4015], 00:15:05.593 | 99.99th=[19792] 00:15:05.593 bw ( KiB/s): min=105984, max=145920, per=99.12%, avg=122705.78, stdev=11520.83, samples=9 00:15:05.593 iops : min=26496, max=36480, avg=30676.44, stdev=2880.21, samples=9 00:15:05.593 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.04% 00:15:05.593 lat (msec) : 2=75.21%, 4=24.68%, 10=0.01%, 20=0.04%, 50=0.01% 00:15:05.593 cpu : usr=33.47%, sys=65.47%, ctx=15, majf=0, minf=763 00:15:05.593 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:05.593 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:05.593 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:05.593 issued rwts: total=0,154812,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:05.593 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:05.593 00:15:05.593 Run status group 0 (all jobs): 00:15:05.593 WRITE: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=605MiB (634MB), run=5002-5002msec 00:15:06.532 ----------------------------------------------------- 00:15:06.532 Suppressions used: 00:15:06.532 count bytes template 00:15:06.532 1 11 /usr/src/fio/parse.c 00:15:06.532 1 8 libtcmalloc_minimal.so 00:15:06.532 1 904 libcrypto.so 00:15:06.532 ----------------------------------------------------- 00:15:06.532 00:15:06.532 00:15:06.532 real 0m14.852s 00:15:06.532 user 0m7.348s 00:15:06.532 sys 0m7.122s 00:15:06.532 17:49:33 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.532 17:49:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:06.532 ************************************ 00:15:06.532 END TEST xnvme_fio_plugin 00:15:06.533 ************************************ 00:15:06.533 17:49:33 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:06.533 17:49:33 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:15:06.533 17:49:33 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:15:06.533 17:49:33 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:06.533 17:49:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:06.533 17:49:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:06.533 17:49:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:06.533 ************************************ 00:15:06.533 START TEST xnvme_rpc 00:15:06.533 ************************************ 00:15:06.533 17:49:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:06.533 17:49:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:06.533 17:49:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:06.533 17:49:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:06.533 17:49:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:06.533 17:49:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71812 00:15:06.533 17:49:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:06.533 17:49:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71812 00:15:06.533 17:49:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71812 ']' 00:15:06.533 17:49:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:06.533 17:49:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:06.533 17:49:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:06.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:06.533 17:49:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:06.533 17:49:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.533 [2024-11-20 17:49:33.555949] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:15:06.533 [2024-11-20 17:49:33.556078] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71812 ] 00:15:06.792 [2024-11-20 17:49:33.736794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.792 [2024-11-20 17:49:33.854870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.730 xnvme_bdev 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.730 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71812 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71812 ']' 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71812 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71812 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:07.989 killing process with pid 71812 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71812' 00:15:07.989 17:49:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71812 00:15:07.989 17:49:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71812 00:15:10.525 00:15:10.525 real 0m3.963s 00:15:10.525 user 0m4.035s 00:15:10.525 sys 0m0.555s 00:15:10.525 17:49:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:10.525 17:49:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:10.525 ************************************ 00:15:10.525 END TEST xnvme_rpc 00:15:10.525 ************************************ 00:15:10.525 17:49:37 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:10.525 17:49:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:10.525 17:49:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:10.525 17:49:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:10.525 ************************************ 00:15:10.525 START TEST xnvme_bdevperf 00:15:10.525 ************************************ 00:15:10.525 17:49:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:10.526 17:49:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:10.526 17:49:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:15:10.526 17:49:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:10.526 17:49:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:10.526 17:49:37 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:10.526 17:49:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:10.526 17:49:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:10.526 { 00:15:10.526 "subsystems": [ 00:15:10.526 { 00:15:10.526 "subsystem": "bdev", 00:15:10.526 "config": [ 00:15:10.526 { 00:15:10.526 "params": { 00:15:10.526 "io_mechanism": "io_uring", 00:15:10.526 "conserve_cpu": true, 00:15:10.526 "filename": "/dev/nvme0n1", 00:15:10.526 "name": "xnvme_bdev" 00:15:10.526 }, 00:15:10.526 "method": "bdev_xnvme_create" 00:15:10.526 }, 00:15:10.526 { 00:15:10.526 "method": "bdev_wait_for_examine" 00:15:10.526 } 00:15:10.526 ] 00:15:10.526 } 00:15:10.526 ] 00:15:10.526 } 00:15:10.526 [2024-11-20 17:49:37.578024] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:15:10.526 [2024-11-20 17:49:37.578143] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71894 ] 00:15:10.785 [2024-11-20 17:49:37.756907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.785 [2024-11-20 17:49:37.875624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.391 Running I/O for 5 seconds... 00:15:13.262 53120.00 IOPS, 207.50 MiB/s [2024-11-20T17:49:41.375Z] 48384.00 IOPS, 189.00 MiB/s [2024-11-20T17:49:42.312Z] 47509.33 IOPS, 185.58 MiB/s [2024-11-20T17:49:43.249Z] 46272.00 IOPS, 180.75 MiB/s 00:15:16.073 Latency(us) 00:15:16.073 [2024-11-20T17:49:43.249Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.073 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:16.073 xnvme_bdev : 5.00 45085.46 176.12 0.00 0.00 1415.66 707.34 6553.60 00:15:16.073 [2024-11-20T17:49:43.249Z] =================================================================================================================== 00:15:16.073 [2024-11-20T17:49:43.249Z] Total : 45085.46 176.12 0.00 0.00 1415.66 707.34 6553.60 00:15:17.470 17:49:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:17.470 17:49:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:17.470 17:49:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:17.470 17:49:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:17.470 17:49:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:17.470 { 00:15:17.470 "subsystems": [ 00:15:17.471 { 00:15:17.471 "subsystem": "bdev", 00:15:17.471 "config": [ 00:15:17.471 { 00:15:17.471 "params": { 00:15:17.471 "io_mechanism": "io_uring", 00:15:17.471 "conserve_cpu": true, 00:15:17.471 "filename": "/dev/nvme0n1", 00:15:17.471 "name": "xnvme_bdev" 00:15:17.471 }, 00:15:17.471 "method": "bdev_xnvme_create" 00:15:17.471 }, 00:15:17.471 { 00:15:17.471 "method": "bdev_wait_for_examine" 00:15:17.471 } 00:15:17.471 ] 00:15:17.471 } 00:15:17.471 ] 00:15:17.471 } 00:15:17.471 [2024-11-20 17:49:44.459484] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:15:17.471 [2024-11-20 17:49:44.459608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71975 ] 00:15:17.471 [2024-11-20 17:49:44.639912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.730 [2024-11-20 17:49:44.749967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.990 Running I/O for 5 seconds... 00:15:20.303 34176.00 IOPS, 133.50 MiB/s [2024-11-20T17:49:48.437Z] 33280.00 IOPS, 130.00 MiB/s [2024-11-20T17:49:49.387Z] 33642.67 IOPS, 131.42 MiB/s [2024-11-20T17:49:50.320Z] 33440.00 IOPS, 130.62 MiB/s [2024-11-20T17:49:50.320Z] 33689.60 IOPS, 131.60 MiB/s 00:15:23.144 Latency(us) 00:15:23.144 [2024-11-20T17:49:50.321Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.145 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:23.145 xnvme_bdev : 5.01 33647.34 131.43 0.00 0.00 1896.54 825.78 7843.26 00:15:23.145 [2024-11-20T17:49:50.321Z] =================================================================================================================== 00:15:23.145 [2024-11-20T17:49:50.321Z] Total : 33647.34 131.43 0.00 0.00 1896.54 825.78 7843.26 00:15:24.079 00:15:24.079 real 0m13.733s 00:15:24.079 user 0m7.819s 00:15:24.079 sys 0m5.447s 00:15:24.079 17:49:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.079 17:49:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:24.079 ************************************ 00:15:24.079 END TEST xnvme_bdevperf 00:15:24.079 ************************************ 00:15:24.339 17:49:51 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:24.339 17:49:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:24.339 17:49:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.339 17:49:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:24.339 ************************************ 00:15:24.339 START TEST xnvme_fio_plugin 00:15:24.339 ************************************ 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:24.339 
17:49:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:24.339 17:49:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:24.339 { 00:15:24.339 "subsystems": [ 00:15:24.339 { 00:15:24.339 "subsystem": "bdev", 00:15:24.339 "config": [ 00:15:24.339 { 00:15:24.339 "params": { 00:15:24.339 "io_mechanism": "io_uring", 00:15:24.339 "conserve_cpu": true, 00:15:24.339 "filename": "/dev/nvme0n1", 00:15:24.339 "name": "xnvme_bdev" 00:15:24.339 }, 00:15:24.339 "method": "bdev_xnvme_create" 00:15:24.339 }, 00:15:24.339 { 00:15:24.339 "method": "bdev_wait_for_examine" 00:15:24.339 } 00:15:24.339 ] 00:15:24.339 } 00:15:24.339 ] 00:15:24.339 } 00:15:24.598 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:24.598 fio-3.35 00:15:24.598 Starting 1 thread 00:15:31.165 00:15:31.165 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72094: Wed Nov 20 17:49:57 2024 00:15:31.165 read: IOPS=36.6k, BW=143MiB/s (150MB/s)(715MiB/5002msec) 00:15:31.165 slat (usec): min=2, max=122, avg= 4.57, stdev= 1.78 00:15:31.165 clat (usec): min=913, max=3416, avg=1569.46, stdev=261.41 00:15:31.165 lat (usec): min=916, max=3449, avg=1574.03, stdev=262.25 00:15:31.165 clat percentiles (usec): 00:15:31.165 | 1.00th=[ 1123], 5.00th=[ 1237], 10.00th=[ 1303], 20.00th=[ 1369], 00:15:31.165 | 30.00th=[ 1418], 40.00th=[ 1467], 50.00th=[ 1516], 60.00th=[ 1582], 00:15:31.165 | 70.00th=[ 1631], 80.00th=[ 1729], 90.00th=[ 1926], 95.00th=[ 2114], 00:15:31.165 | 99.00th=[ 2409], 99.50th=[ 2507], 99.90th=[ 2737], 99.95th=[ 2802], 00:15:31.165 | 99.99th=[ 3261] 00:15:31.165 bw ( KiB/s): 
min=130048, max=166912, per=99.80%, avg=145976.89, stdev=11832.75, samples=9 00:15:31.165 iops : min=32512, max=41728, avg=36494.22, stdev=2958.19, samples=9 00:15:31.165 lat (usec) : 1000=0.11% 00:15:31.165 lat (msec) : 2=92.26%, 4=7.63% 00:15:31.165 cpu : usr=48.07%, sys=48.43%, ctx=72, majf=0, minf=762 00:15:31.165 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:31.165 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:31.165 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:15:31.165 issued rwts: total=182912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:31.165 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:31.165 00:15:31.165 Run status group 0 (all jobs): 00:15:31.165 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=715MiB (749MB), run=5002-5002msec 00:15:31.733 ----------------------------------------------------- 00:15:31.734 Suppressions used: 00:15:31.734 count bytes template 00:15:31.734 1 11 /usr/src/fio/parse.c 00:15:31.734 1 8 libtcmalloc_minimal.so 00:15:31.734 1 904 libcrypto.so 00:15:31.734 ----------------------------------------------------- 00:15:31.734 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:31.734 17:49:58 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:31.734 { 00:15:31.734 "subsystems": [ 00:15:31.734 { 00:15:31.734 "subsystem": "bdev", 00:15:31.734 "config": [ 00:15:31.734 { 00:15:31.734 "params": { 00:15:31.734 "io_mechanism": "io_uring", 00:15:31.734 "conserve_cpu": true, 00:15:31.734 "filename": "/dev/nvme0n1", 00:15:31.734 "name": "xnvme_bdev" 00:15:31.734 }, 00:15:31.734 "method": "bdev_xnvme_create" 00:15:31.734 }, 00:15:31.734 { 00:15:31.734 "method": "bdev_wait_for_examine" 00:15:31.734 } 00:15:31.734 ] 00:15:31.734 } 00:15:31.734 ] 00:15:31.734 } 00:15:31.993 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:31.993 fio-3.35 00:15:31.993 Starting 1 thread 00:15:38.591 00:15:38.591 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72197: Wed Nov 20 17:50:04 2024 00:15:38.591 write: IOPS=31.5k, BW=123MiB/s (129MB/s)(615MiB/5001msec); 0 zone resets 00:15:38.591 slat (usec): min=2, max=817, avg= 5.52, stdev= 3.54 00:15:38.591 clat (usec): min=150, max=147320, avg=1817.49, stdev=2942.81 00:15:38.591 lat (usec): min=155, max=147327, avg=1823.00, stdev=2942.98 00:15:38.591 clat percentiles (usec): 00:15:38.591 | 1.00th=[ 1037], 5.00th=[ 1188], 10.00th=[ 1287], 20.00th=[ 1418], 00:15:38.591 | 30.00th=[ 1516], 40.00th=[ 1598], 50.00th=[ 1696], 60.00th=[ 1811], 00:15:38.591 | 70.00th=[ 1926], 80.00th=[ 2073], 90.00th=[ 2278], 95.00th=[ 2442], 00:15:38.591 | 99.00th=[ 2769], 99.50th=[ 3032], 99.90th=[ 8979], 99.95th=[ 12518], 00:15:38.591 | 99.99th=[145753] 00:15:38.591 bw ( KiB/s): min=85800, max=152576, per=99.59%, avg=125395.89, stdev=19649.23, samples=9 00:15:38.591 iops : min=21450, max=38144, avg=31348.89, stdev=4912.28, samples=9 00:15:38.591 lat (usec) : 250=0.01%, 500=0.03%, 750=0.06%, 1000=0.44% 00:15:38.591 lat (msec) : 2=74.66%, 4=24.53%, 10=0.19%, 20=0.04%, 250=0.04% 00:15:38.591 cpu : usr=49.16%, sys=46.94%, ctx=24, majf=0, minf=763 00:15:38.591 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.4%, 16=24.9%, 32=50.2%, >=64=1.6% 00:15:38.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:38.591 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:38.591 issued rwts: total=0,157422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:38.591 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:38.591 00:15:38.591 Run status group 0 (all jobs): 00:15:38.591 WRITE: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=615MiB (645MB), run=5001-5001msec 00:15:39.161 ----------------------------------------------------- 00:15:39.161 Suppressions used: 00:15:39.161 count bytes template 00:15:39.161 1 11 /usr/src/fio/parse.c 00:15:39.161 1 8 libtcmalloc_minimal.so 00:15:39.161 1 904 libcrypto.so 00:15:39.161 ----------------------------------------------------- 00:15:39.161 00:15:39.161 00:15:39.161 real 0m14.843s 00:15:39.161 user 0m8.715s 00:15:39.161 sys 0m5.500s 00:15:39.161 17:50:06 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.161 ************************************ 00:15:39.161 END TEST xnvme_fio_plugin 00:15:39.161 ************************************ 00:15:39.161 17:50:06 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:39.161 17:50:06 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:15:39.161 17:50:06 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:15:39.161 17:50:06 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:15:39.161 17:50:06 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:15:39.161 17:50:06 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:15:39.161 17:50:06 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:15:39.161 17:50:06 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:15:39.161 17:50:06 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:15:39.161 17:50:06 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:15:39.161 17:50:06 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:39.161 17:50:06 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.161 17:50:06 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:39.161 ************************************ 00:15:39.161 START TEST xnvme_rpc 00:15:39.161 ************************************ 00:15:39.161 17:50:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:15:39.161 17:50:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:15:39.161 17:50:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:15:39.161 17:50:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:15:39.161 17:50:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:15:39.161 17:50:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:39.161 17:50:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72282 00:15:39.161 17:50:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72282 00:15:39.161 17:50:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72282 ']' 00:15:39.161 17:50:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.161 17:50:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:39.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.161 17:50:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.161 17:50:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:39.161 17:50:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:39.161 [2024-11-20 17:50:06.295972] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:15:39.161 [2024-11-20 17:50:06.296260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72282 ] 00:15:39.420 [2024-11-20 17:50:06.461291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.420 [2024-11-20 17:50:06.584687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.358 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:40.358 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:15:40.358 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:15:40.358 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.358 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.358 xnvme_bdev 00:15:40.358 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.358 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:15:40.358 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:40.358 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:15:40.358 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.358 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.358 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72282 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72282 ']' 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72282 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72282 00:15:40.631 killing process with pid 72282 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72282' 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72282 00:15:40.631 17:50:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72282 00:15:43.186 00:15:43.186 real 0m3.949s 00:15:43.186 user 0m4.121s 00:15:43.186 sys 0m0.525s 00:15:43.186 ************************************ 00:15:43.186 END TEST xnvme_rpc 00:15:43.186 ************************************ 00:15:43.186 17:50:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:43.186 17:50:10 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:43.186 17:50:10 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:15:43.186 17:50:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:43.186 17:50:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:43.186 17:50:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:43.186 ************************************ 00:15:43.186 START TEST xnvme_bdevperf 00:15:43.186 ************************************ 00:15:43.186 17:50:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:15:43.186 17:50:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:15:43.186 17:50:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:15:43.186 17:50:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:43.186 17:50:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:15:43.186 17:50:10 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:15:43.186 17:50:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:43.186 17:50:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:43.186 { 00:15:43.186 "subsystems": [ 00:15:43.186 { 00:15:43.186 "subsystem": "bdev", 00:15:43.186 "config": [ 00:15:43.186 { 00:15:43.186 "params": { 00:15:43.186 "io_mechanism": "io_uring_cmd", 00:15:43.186 "conserve_cpu": false, 00:15:43.186 "filename": "/dev/ng0n1", 00:15:43.186 "name": "xnvme_bdev" 00:15:43.186 }, 00:15:43.186 "method": "bdev_xnvme_create" 00:15:43.186 }, 00:15:43.186 { 00:15:43.186 "method": "bdev_wait_for_examine" 00:15:43.186 } 00:15:43.186 ] 00:15:43.186 } 00:15:43.186 ] 00:15:43.186 } 00:15:43.186 [2024-11-20 17:50:10.312331] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:15:43.186 [2024-11-20 17:50:10.312454] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72363 ] 00:15:43.447 [2024-11-20 17:50:10.493908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.447 [2024-11-20 17:50:10.609120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:44.015 Running I/O for 5 seconds... 00:15:45.886 40128.00 IOPS, 156.75 MiB/s [2024-11-20T17:50:13.997Z] 38016.00 IOPS, 148.50 MiB/s [2024-11-20T17:50:15.384Z] 38016.00 IOPS, 148.50 MiB/s [2024-11-20T17:50:16.327Z] 37088.00 IOPS, 144.88 MiB/s 00:15:49.151 Latency(us) 00:15:49.151 [2024-11-20T17:50:16.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:49.151 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:15:49.151 xnvme_bdev : 5.00 36091.90 140.98 0.00 0.00 1768.00 881.71 5579.77 00:15:49.151 [2024-11-20T17:50:16.327Z] =================================================================================================================== 00:15:49.151 [2024-11-20T17:50:16.327Z] Total : 36091.90 140.98 0.00 0.00 1768.00 881.71 5579.77 00:15:50.087 17:50:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:50.087 17:50:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:50.087 17:50:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:15:50.087 17:50:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:50.087 17:50:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:50.087 { 00:15:50.087 "subsystems": [ 00:15:50.087 { 00:15:50.087 "subsystem": "bdev", 00:15:50.087 "config": [ 00:15:50.087 { 00:15:50.087 "params": { 00:15:50.087 "io_mechanism": "io_uring_cmd", 00:15:50.087 "conserve_cpu": false, 00:15:50.087 "filename": "/dev/ng0n1", 00:15:50.087 "name": "xnvme_bdev" 00:15:50.087 }, 00:15:50.087 "method": "bdev_xnvme_create" 00:15:50.087 }, 00:15:50.087 { 00:15:50.087 "method": "bdev_wait_for_examine" 00:15:50.087 } 00:15:50.087 ] 00:15:50.087 } 00:15:50.088 ] 00:15:50.088 } 00:15:50.088 [2024-11-20 17:50:17.201548] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:15:50.088 [2024-11-20 17:50:17.201697] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72443 ] 00:15:50.346 [2024-11-20 17:50:17.388947] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.605 [2024-11-20 17:50:17.522982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.864 Running I/O for 5 seconds... 00:15:52.739 35264.00 IOPS, 137.75 MiB/s [2024-11-20T17:50:21.290Z] 33408.00 IOPS, 130.50 MiB/s [2024-11-20T17:50:22.252Z] 33365.33 IOPS, 130.33 MiB/s [2024-11-20T17:50:23.191Z] 32592.00 IOPS, 127.31 MiB/s [2024-11-20T17:50:23.191Z] 32640.00 IOPS, 127.50 MiB/s 00:15:56.015 Latency(us) 00:15:56.015 [2024-11-20T17:50:23.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.015 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:15:56.015 xnvme_bdev : 5.00 32629.90 127.46 0.00 0.00 1955.42 746.82 5053.38 00:15:56.015 [2024-11-20T17:50:23.191Z] =================================================================================================================== 00:15:56.015 [2024-11-20T17:50:23.191Z] Total : 32629.90 127.46 0.00 0.00 1955.42 746.82 5053.38 00:15:56.952 17:50:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:56.952 17:50:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:15:56.952 17:50:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:56.952 17:50:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:56.952 17:50:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:56.952 { 00:15:56.952 "subsystems": [ 00:15:56.952 { 00:15:56.952 "subsystem": "bdev", 00:15:56.952 "config": [ 00:15:56.952 { 00:15:56.952 "params": { 00:15:56.952 "io_mechanism": "io_uring_cmd", 00:15:56.952 "conserve_cpu": false, 00:15:56.952 "filename": "/dev/ng0n1", 00:15:56.952 "name": "xnvme_bdev" 00:15:56.952 }, 00:15:56.952 "method": "bdev_xnvme_create" 00:15:56.952 }, 00:15:56.952 { 00:15:56.952 "method": "bdev_wait_for_examine" 00:15:56.952 } 00:15:56.952 ] 00:15:56.952 } 00:15:56.952 ] 00:15:56.952 } 00:15:57.212 [2024-11-20 17:50:24.128148] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:15:57.212 [2024-11-20 17:50:24.128441] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72519 ] 00:15:57.212 [2024-11-20 17:50:24.309443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.471 [2024-11-20 17:50:24.428969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.731 Running I/O for 5 seconds... 
00:16:00.041 70976.00 IOPS, 277.25 MiB/s [2024-11-20T17:50:28.154Z] 70880.00 IOPS, 276.88 MiB/s [2024-11-20T17:50:29.091Z] 71232.00 IOPS, 278.25 MiB/s [2024-11-20T17:50:30.046Z] 71248.00 IOPS, 278.31 MiB/s [2024-11-20T17:50:30.046Z] 71244.80 IOPS, 278.30 MiB/s 00:16:02.870 Latency(us) 00:16:02.870 [2024-11-20T17:50:30.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.870 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:16:02.870 xnvme_bdev : 5.00 71228.57 278.24 0.00 0.00 895.81 539.55 3842.67 00:16:02.870 [2024-11-20T17:50:30.046Z] =================================================================================================================== 00:16:02.870 [2024-11-20T17:50:30.047Z] Total : 71228.57 278.24 0.00 0.00 895.81 539.55 3842.67 00:16:03.806 17:50:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:03.806 17:50:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:16:03.806 17:50:30 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:03.806 17:50:30 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:03.806 17:50:30 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:03.806 { 00:16:03.806 "subsystems": [ 00:16:03.806 { 00:16:03.806 "subsystem": "bdev", 00:16:03.806 "config": [ 00:16:03.806 { 00:16:03.806 "params": { 00:16:03.806 "io_mechanism": "io_uring_cmd", 00:16:03.806 "conserve_cpu": false, 00:16:03.806 "filename": "/dev/ng0n1", 00:16:03.806 "name": "xnvme_bdev" 00:16:03.806 }, 00:16:03.806 "method": "bdev_xnvme_create" 00:16:03.806 }, 00:16:03.806 { 00:16:03.806 "method": "bdev_wait_for_examine" 00:16:03.806 } 00:16:03.807 ] 00:16:03.807 } 00:16:03.807 ] 00:16:03.807 } 00:16:04.066 [2024-11-20 17:50:31.002560] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:16:04.066 [2024-11-20 17:50:31.002682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72604 ] 00:16:04.066 [2024-11-20 17:50:31.182668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.325 [2024-11-20 17:50:31.297802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.582 Running I/O for 5 seconds... 
00:16:06.494 53649.00 IOPS, 209.57 MiB/s [2024-11-20T17:50:35.046Z] 53392.50 IOPS, 208.56 MiB/s [2024-11-20T17:50:35.982Z] 44640.67 IOPS, 174.38 MiB/s [2024-11-20T17:50:36.916Z] 42798.00 IOPS, 167.18 MiB/s [2024-11-20T17:50:36.916Z] 44900.80 IOPS, 175.39 MiB/s 00:16:09.740 Latency(us) 00:16:09.740 [2024-11-20T17:50:36.916Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:09.740 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:16:09.740 xnvme_bdev : 5.01 44860.46 175.24 0.00 0.00 1423.10 67.03 30951.94 00:16:09.740 [2024-11-20T17:50:36.916Z] =================================================================================================================== 00:16:09.740 [2024-11-20T17:50:36.916Z] Total : 44860.46 175.24 0.00 0.00 1423.10 67.03 30951.94 00:16:10.677 ************************************ 00:16:10.677 END TEST xnvme_bdevperf 00:16:10.677 ************************************ 00:16:10.677 00:16:10.677 real 0m27.570s 00:16:10.677 user 0m13.816s 00:16:10.677 sys 0m13.354s 00:16:10.677 17:50:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:10.677 17:50:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:10.677 17:50:37 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:10.677 17:50:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:10.677 17:50:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:10.677 17:50:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:10.964 ************************************ 00:16:10.964 START TEST xnvme_fio_plugin 00:16:10.964 ************************************ 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:10.964 17:50:37 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:10.964 { 00:16:10.964 "subsystems": [ 00:16:10.964 { 00:16:10.964 "subsystem": "bdev", 00:16:10.964 "config": [ 00:16:10.964 { 00:16:10.964 "params": { 00:16:10.964 "io_mechanism": "io_uring_cmd", 00:16:10.964 "conserve_cpu": false, 00:16:10.964 "filename": "/dev/ng0n1", 00:16:10.964 "name": "xnvme_bdev" 00:16:10.964 }, 00:16:10.964 "method": "bdev_xnvme_create" 00:16:10.964 }, 00:16:10.964 { 00:16:10.964 "method": "bdev_wait_for_examine" 00:16:10.964 } 00:16:10.964 ] 00:16:10.964 } 00:16:10.964 ] 00:16:10.964 } 00:16:10.964 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:10.964 fio-3.35 00:16:10.964 Starting 1 thread 00:16:17.564 00:16:17.564 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72730: Wed Nov 20 17:50:43 2024 00:16:17.564 read: IOPS=30.1k, BW=118MiB/s (123MB/s)(589MiB/5001msec) 00:16:17.564 slat (usec): min=2, max=147, avg= 5.99, stdev= 2.29 00:16:17.564 clat (usec): min=990, max=3421, avg=1887.11, stdev=312.18 00:16:17.564 lat (usec): min=993, max=3431, avg=1893.09, stdev=313.40 00:16:17.564 clat percentiles (usec): 00:16:17.564 | 1.00th=[ 1156], 5.00th=[ 1287], 10.00th=[ 1467], 20.00th=[ 1663], 00:16:17.564 | 30.00th=[ 1762], 40.00th=[ 1827], 50.00th=[ 1893], 60.00th=[ 1958], 00:16:17.564 | 70.00th=[ 2024], 80.00th=[ 2114], 90.00th=[ 2278], 95.00th=[ 2409], 00:16:17.564 | 99.00th=[ 2606], 99.50th=[ 2704], 99.90th=[ 3032], 99.95th=[ 3163], 00:16:17.564 | 99.99th=[ 3359] 00:16:17.564 bw ( KiB/s): min=102912, max=131584, per=100.00%, avg=120945.78, stdev=8326.01, samples=9 00:16:17.564 iops : min=25728, max=32896, avg=30236.44, stdev=2081.50, samples=9 00:16:17.564 lat (usec) : 1000=0.01% 00:16:17.564 lat (msec) : 2=66.19%, 4=33.81% 00:16:17.564 cpu : usr=34.60%, sys=64.38%, ctx=9, majf=0, minf=762 00:16:17.564 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:17.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.564 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:16:17.564 issued rwts: total=150656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.564 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:17.564 00:16:17.564 Run status group 0 (all jobs): 00:16:17.564 READ: bw=118MiB/s (123MB/s), 118MiB/s-118MiB/s (123MB/s-123MB/s), io=589MiB (617MB), run=5001-5001msec 00:16:18.131 ----------------------------------------------------- 00:16:18.131 Suppressions used: 00:16:18.131 count bytes template 00:16:18.131 1 11 /usr/src/fio/parse.c 00:16:18.131 1 8 libtcmalloc_minimal.so 00:16:18.131 1 904 libcrypto.so 00:16:18.131 ----------------------------------------------------- 00:16:18.131 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:18.132 17:50:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:18.132 { 00:16:18.132 "subsystems": [ 00:16:18.132 { 00:16:18.132 "subsystem": "bdev", 00:16:18.132 "config": [ 00:16:18.132 { 00:16:18.132 "params": { 00:16:18.132 "io_mechanism": "io_uring_cmd", 00:16:18.132 "conserve_cpu": false, 00:16:18.132 "filename": "/dev/ng0n1", 00:16:18.132 "name": "xnvme_bdev" 00:16:18.132 }, 00:16:18.132 "method": "bdev_xnvme_create" 00:16:18.132 }, 00:16:18.132 { 00:16:18.132 "method": "bdev_wait_for_examine" 00:16:18.132 } 00:16:18.132 ] 00:16:18.132 } 00:16:18.132 ] 00:16:18.132 } 00:16:18.390 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:18.390 fio-3.35 00:16:18.390 Starting 1 thread 00:16:24.955 00:16:24.955 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72828: Wed Nov 20 17:50:51 2024 00:16:24.955 write: IOPS=30.9k, BW=121MiB/s (127MB/s)(604MiB/5002msec); 0 zone resets 00:16:24.955 slat (usec): min=2, max=139, avg= 6.10, stdev= 2.31 00:16:24.955 clat (usec): min=766, max=3410, avg=1831.09, stdev=324.05 00:16:24.955 lat (usec): min=769, max=3426, avg=1837.19, stdev=325.10 00:16:24.955 clat percentiles (usec): 00:16:24.955 | 1.00th=[ 996], 5.00th=[ 1188], 10.00th=[ 1418], 20.00th=[ 1631], 00:16:24.955 | 30.00th=[ 1713], 40.00th=[ 1778], 50.00th=[ 1844], 60.00th=[ 1909], 00:16:24.955 | 70.00th=[ 1975], 80.00th=[ 2057], 90.00th=[ 2212], 95.00th=[ 2343], 00:16:24.955 | 99.00th=[ 2638], 99.50th=[ 2802], 99.90th=[ 3097], 99.95th=[ 3228], 00:16:24.955 | 99.99th=[ 3326] 00:16:24.955 bw ( KiB/s): min=119296, max=132343, per=100.00%, avg=124102.11, stdev=4497.60, samples=9 00:16:24.955 iops : min=29824, max=33085, avg=31025.44, stdev=1124.23, samples=9 00:16:24.955 lat (usec) : 1000=1.07% 00:16:24.955 lat (msec) : 2=72.41%, 4=26.53% 00:16:24.955 cpu : usr=34.91%, sys=64.03%, ctx=12, majf=0, minf=763 00:16:24.955 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:16:24.955 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.955 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:16:24.955 issued rwts: total=0,154496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.955 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:24.955 00:16:24.955 Run status group 0 (all jobs): 00:16:24.955 WRITE: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=604MiB (633MB), run=5002-5002msec 00:16:25.523 ----------------------------------------------------- 00:16:25.523 Suppressions used: 00:16:25.523 count bytes template 00:16:25.523 1 11 /usr/src/fio/parse.c 00:16:25.523 1 8 libtcmalloc_minimal.so 00:16:25.523 1 904 libcrypto.so 00:16:25.523 ----------------------------------------------------- 00:16:25.523 00:16:25.523 00:16:25.523 real 0m14.768s 00:16:25.523 user 0m7.261s 00:16:25.523 sys 0m7.154s 00:16:25.523 17:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.523 ************************************ 00:16:25.523 17:50:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:25.523 END TEST xnvme_fio_plugin 00:16:25.523 ************************************ 00:16:25.523 17:50:52 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:16:25.523 17:50:52 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:16:25.523 17:50:52 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:16:25.523 17:50:52 
nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:16:25.523 17:50:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:25.523 17:50:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.523 17:50:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:25.523 ************************************ 00:16:25.523 START TEST xnvme_rpc 00:16:25.523 ************************************ 00:16:25.783 17:50:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:16:25.783 17:50:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:16:25.783 17:50:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:16:25.783 17:50:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:16:25.783 17:50:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:16:25.783 17:50:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72912 00:16:25.783 17:50:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:25.783 17:50:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72912 00:16:25.783 17:50:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72912 ']' 00:16:25.783 17:50:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.783 17:50:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.783 17:50:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.783 17:50:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.783 17:50:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:25.783 [2024-11-20 17:50:52.806915] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
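The xnvme_rpc test starting here exercises a create/inspect/delete round trip against the target: bdev_xnvme_create registers /dev/ng0n1 as xnvme_bdev over io_uring_cmd with -c (conserve_cpu), framework_get_config plus jq reads each parameter back, and bdev_xnvme_delete tears it down. Since rpc_cmd is a wrapper over scripts/rpc.py talking to the default /var/tmp/spdk.sock, a hand-driven equivalent (method names and flags copied from the trace below) is roughly:

    # against a running spdk_tgt listening on /var/tmp/spdk.sock
    scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
    scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    scripts/rpc.py bdev_xnvme_delete xnvme_bdev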
00:16:25.783 [2024-11-20 17:50:52.807032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72912 ] 00:16:26.042 [2024-11-20 17:50:52.989046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.042 [2024-11-20 17:50:53.104360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.979 17:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.979 17:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:26.979 17:50:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:16:26.979 17:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.979 17:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.979 xnvme_bdev 00:16:26.979 17:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.979 17:50:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:16:26.979 17:50:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:26.979 17:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.979 17:50:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:16:26.979 17:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:16:26.979 
17:50:54 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.979 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:27.238 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.238 17:50:54 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72912 00:16:27.238 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72912 ']' 00:16:27.238 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72912 00:16:27.238 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:16:27.238 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.238 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72912 00:16:27.238 killing process with pid 72912 00:16:27.238 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.238 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.238 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72912' 00:16:27.238 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72912 00:16:27.238 17:50:54 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72912 00:16:29.828 ************************************ 00:16:29.828 END TEST xnvme_rpc 00:16:29.828 ************************************ 00:16:29.828 00:16:29.828 real 0m3.906s 00:16:29.828 user 0m3.943s 00:16:29.828 sys 0m0.576s 00:16:29.828 17:50:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.828 17:50:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.828 17:50:56 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:16:29.828 17:50:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:29.828 17:50:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.828 17:50:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:29.828 ************************************ 00:16:29.828 START TEST xnvme_bdevperf 00:16:29.828 ************************************ 00:16:29.828 17:50:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:16:29.828 17:50:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:16:29.828 17:50:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:16:29.828 17:50:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:29.828 17:50:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:16:29.828 17:50:56 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:16:29.828 17:50:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:29.828 17:50:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:29.828 { 00:16:29.828 "subsystems": [ 00:16:29.828 { 00:16:29.828 "subsystem": "bdev", 00:16:29.828 "config": [ 00:16:29.828 { 00:16:29.828 "params": { 00:16:29.828 "io_mechanism": "io_uring_cmd", 00:16:29.828 "conserve_cpu": true, 00:16:29.828 "filename": "/dev/ng0n1", 00:16:29.828 "name": "xnvme_bdev" 00:16:29.828 }, 00:16:29.828 "method": "bdev_xnvme_create" 00:16:29.828 }, 00:16:29.828 { 00:16:29.828 "method": "bdev_wait_for_examine" 00:16:29.828 } 00:16:29.828 ] 00:16:29.828 } 00:16:29.828 ] 00:16:29.829 } 00:16:29.829 [2024-11-20 17:50:56.769526] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:16:29.829 [2024-11-20 17:50:56.769646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72993 ] 00:16:29.829 [2024-11-20 17:50:56.951086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:30.088 [2024-11-20 17:50:57.069145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.348 Running I/O for 5 seconds... 00:16:32.663 29120.00 IOPS, 113.75 MiB/s [2024-11-20T17:51:00.776Z] 28928.00 IOPS, 113.00 MiB/s [2024-11-20T17:51:01.713Z] 28778.67 IOPS, 112.42 MiB/s [2024-11-20T17:51:02.653Z] 29024.00 IOPS, 113.38 MiB/s [2024-11-20T17:51:02.653Z] 29120.00 IOPS, 113.75 MiB/s 00:16:35.477 Latency(us) 00:16:35.477 [2024-11-20T17:51:02.653Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.477 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:16:35.477 xnvme_bdev : 5.01 29069.99 113.55 0.00 0.00 2194.94 1190.97 8317.02 00:16:35.477 [2024-11-20T17:51:02.653Z] =================================================================================================================== 00:16:35.477 [2024-11-20T17:51:02.653Z] Total : 29069.99 113.55 0.00 0.00 2194.94 1190.97 8317.02 00:16:36.414 17:51:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:36.414 17:51:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:36.414 17:51:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:16:36.414 17:51:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:36.414 17:51:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:36.673 { 00:16:36.673 "subsystems": [ 00:16:36.673 { 00:16:36.673 "subsystem": "bdev", 00:16:36.673 "config": [ 00:16:36.673 { 00:16:36.673 "params": { 00:16:36.673 "io_mechanism": "io_uring_cmd", 00:16:36.673 "conserve_cpu": true, 00:16:36.673 "filename": "/dev/ng0n1", 00:16:36.673 "name": "xnvme_bdev" 00:16:36.673 }, 00:16:36.673 "method": "bdev_xnvme_create" 00:16:36.673 }, 00:16:36.673 { 00:16:36.673 "method": "bdev_wait_for_examine" 00:16:36.673 } 00:16:36.673 ] 00:16:36.673 } 00:16:36.673 ] 00:16:36.673 } 00:16:36.673 [2024-11-20 17:51:03.676521] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
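This second bdevperf sweep differs from the first only in "conserve_cpu": true in the generated config. The loop that flips the knob was traced at xnvme.sh@82-88 earlier; reconstructed from those trace lines as a sketch (the two-element array is inferred from the two passes in this log, not printed in the trace):

    for cc in "${xnvme_conserve_cpu[@]}"; do        # (false true), inferred
        method_bdev_xnvme_create_0["conserve_cpu"]=$cc   # xnvme.sh@83
        conserve_cpu=$cc                                 # xnvme.sh@84
        run_test xnvme_rpc xnvme_rpc                     # xnvme.sh@86
        run_test xnvme_bdevperf xnvme_bdevperf           # xnvme.sh@87
        run_test xnvme_fio_plugin xnvme_fio_plugin       # xnvme.sh@88
    done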
00:16:36.673 [2024-11-20 17:51:03.676688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73075 ] 00:16:36.933 [2024-11-20 17:51:03.872092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.933 [2024-11-20 17:51:04.021087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.502 Running I/O for 5 seconds... 00:16:39.377 30269.00 IOPS, 118.24 MiB/s [2024-11-20T17:51:07.490Z] 30051.00 IOPS, 117.39 MiB/s [2024-11-20T17:51:08.436Z] 30055.67 IOPS, 117.40 MiB/s [2024-11-20T17:51:09.810Z] 29165.75 IOPS, 113.93 MiB/s 00:16:42.634 Latency(us) 00:16:42.634 [2024-11-20T17:51:09.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.634 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:16:42.634 xnvme_bdev : 5.00 29140.92 113.83 0.00 0.00 2189.00 74.85 13054.56 00:16:42.634 [2024-11-20T17:51:09.810Z] =================================================================================================================== 00:16:42.634 [2024-11-20T17:51:09.810Z] Total : 29140.92 113.83 0.00 0.00 2189.00 74.85 13054.56 00:16:43.571 17:51:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:43.571 17:51:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:16:43.571 17:51:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:43.571 17:51:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:43.571 17:51:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:43.571 { 00:16:43.571 "subsystems": [ 00:16:43.571 { 00:16:43.571 "subsystem": "bdev", 00:16:43.571 "config": [ 00:16:43.571 { 00:16:43.571 "params": { 00:16:43.571 "io_mechanism": "io_uring_cmd", 00:16:43.571 "conserve_cpu": true, 00:16:43.571 "filename": "/dev/ng0n1", 00:16:43.571 "name": "xnvme_bdev" 00:16:43.571 }, 00:16:43.571 "method": "bdev_xnvme_create" 00:16:43.571 }, 00:16:43.571 { 00:16:43.571 "method": "bdev_wait_for_examine" 00:16:43.571 } 00:16:43.571 ] 00:16:43.571 } 00:16:43.571 ] 00:16:43.571 } 00:16:43.571 [2024-11-20 17:51:10.634447] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:16:43.571 [2024-11-20 17:51:10.634721] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73149 ] 00:16:43.830 [2024-11-20 17:51:10.814314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:43.830 [2024-11-20 17:51:10.924653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.088 Running I/O for 5 seconds... 
00:16:46.401 72960.00 IOPS, 285.00 MiB/s [2024-11-20T17:51:14.514Z] 71328.00 IOPS, 278.62 MiB/s [2024-11-20T17:51:15.449Z] 70656.00 IOPS, 276.00 MiB/s [2024-11-20T17:51:16.386Z] 70704.00 IOPS, 276.19 MiB/s [2024-11-20T17:51:16.387Z] 70700.80 IOPS, 276.18 MiB/s 00:16:49.211 Latency(us) 00:16:49.211 [2024-11-20T17:51:16.387Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.211 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:16:49.211 xnvme_bdev : 5.00 70686.58 276.12 0.00 0.00 902.62 365.19 2710.93 00:16:49.211 [2024-11-20T17:51:16.387Z] =================================================================================================================== 00:16:49.211 [2024-11-20T17:51:16.387Z] Total : 70686.58 276.12 0.00 0.00 902.62 365.19 2710.93 00:16:50.589 17:51:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:50.589 17:51:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:16:50.589 17:51:17 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:16:50.589 17:51:17 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:16:50.589 17:51:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:50.589 { 00:16:50.589 "subsystems": [ 00:16:50.589 { 00:16:50.589 "subsystem": "bdev", 00:16:50.589 "config": [ 00:16:50.589 { 00:16:50.589 "params": { 00:16:50.589 "io_mechanism": "io_uring_cmd", 00:16:50.589 "conserve_cpu": true, 00:16:50.589 "filename": "/dev/ng0n1", 00:16:50.589 "name": "xnvme_bdev" 00:16:50.589 }, 00:16:50.589 "method": "bdev_xnvme_create" 00:16:50.589 }, 00:16:50.589 { 00:16:50.589 "method": "bdev_wait_for_examine" 00:16:50.589 } 00:16:50.589 ] 00:16:50.589 } 00:16:50.589 ] 00:16:50.589 } 00:16:50.589 [2024-11-20 17:51:17.454338] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:16:50.589 [2024-11-20 17:51:17.454464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73229 ] 00:16:50.589 [2024-11-20 17:51:17.634125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:50.589 [2024-11-20 17:51:17.745431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:51.157 Running I/O for 5 seconds... 
00:16:53.061 57795.00 IOPS, 225.76 MiB/s [2024-11-20T17:51:21.172Z] 50158.00 IOPS, 195.93 MiB/s [2024-11-20T17:51:22.107Z] 45716.00 IOPS, 178.58 MiB/s [2024-11-20T17:51:23.484Z] 45292.00 IOPS, 176.92 MiB/s 00:16:56.308 Latency(us) 00:16:56.308 [2024-11-20T17:51:23.484Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.308 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:16:56.308 xnvme_bdev : 5.00 44386.34 173.38 0.00 0.00 1434.78 168.61 14002.07 00:16:56.308 [2024-11-20T17:51:23.484Z] =================================================================================================================== 00:16:56.308 [2024-11-20T17:51:23.484Z] Total : 44386.34 173.38 0.00 0.00 1434.78 168.61 14002.07 00:16:57.244 ************************************ 00:16:57.244 END TEST xnvme_bdevperf 00:16:57.244 ************************************ 00:16:57.244 00:16:57.244 real 0m27.572s 00:16:57.244 user 0m17.088s 00:16:57.244 sys 0m8.482s 00:16:57.244 17:51:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.244 17:51:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:57.244 17:51:24 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:16:57.244 17:51:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:57.244 17:51:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.244 17:51:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:16:57.244 ************************************ 00:16:57.244 START TEST xnvme_fio_plugin 00:16:57.244 ************************************ 00:16:57.244 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:16:57.244 17:51:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:16:57.244 17:51:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:16:57.244 17:51:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:16:57.244 17:51:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:57.244 17:51:24 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:16:57.244 17:51:24 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1345 -- # shift 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:57.245 17:51:24 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:16:57.245 { 00:16:57.245 "subsystems": [ 00:16:57.245 { 00:16:57.245 "subsystem": "bdev", 00:16:57.245 "config": [ 00:16:57.245 { 00:16:57.245 "params": { 00:16:57.245 "io_mechanism": "io_uring_cmd", 00:16:57.245 "conserve_cpu": true, 00:16:57.245 "filename": "/dev/ng0n1", 00:16:57.245 "name": "xnvme_bdev" 00:16:57.245 }, 00:16:57.245 "method": "bdev_xnvme_create" 00:16:57.245 }, 00:16:57.245 { 00:16:57.245 "method": "bdev_wait_for_examine" 00:16:57.245 } 00:16:57.245 ] 00:16:57.245 } 00:16:57.245 ] 00:16:57.245 } 00:16:57.504 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:16:57.504 fio-3.35 00:16:57.504 Starting 1 thread 00:17:04.120 00:17:04.120 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73353: Wed Nov 20 17:51:30 2024 00:17:04.120 read: IOPS=30.5k, BW=119MiB/s (125MB/s)(597MiB/5002msec) 00:17:04.120 slat (nsec): min=2572, max=86241, avg=5917.90, stdev=2218.35 00:17:04.120 clat (usec): min=1011, max=3824, avg=1861.64, stdev=293.28 00:17:04.120 lat (usec): min=1014, max=3835, avg=1867.56, stdev=294.31 00:17:04.120 clat percentiles (usec): 00:17:04.120 | 1.00th=[ 1254], 5.00th=[ 1385], 10.00th=[ 1483], 20.00th=[ 1614], 00:17:04.120 | 30.00th=[ 1713], 40.00th=[ 1778], 50.00th=[ 1860], 60.00th=[ 1926], 00:17:04.120 | 70.00th=[ 2008], 80.00th=[ 2114], 90.00th=[ 2245], 95.00th=[ 2343], 00:17:04.120 | 99.00th=[ 2573], 99.50th=[ 2671], 99.90th=[ 2900], 99.95th=[ 3130], 00:17:04.120 | 99.99th=[ 3687] 00:17:04.120 bw ( KiB/s): min=109056, max=143360, per=100.00%, avg=122368.00, stdev=9980.72, samples=9 00:17:04.120 iops : min=27264, max=35840, avg=30592.00, stdev=2495.18, samples=9 00:17:04.120 lat (msec) : 2=69.02%, 4=30.98% 00:17:04.120 cpu : usr=51.45%, sys=45.93%, ctx=19, majf=0, minf=762 00:17:04.120 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:17:04.120 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:04.120 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:17:04.120 issued rwts: total=152768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:17:04.120 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:04.120 00:17:04.120 Run status group 0 (all jobs): 00:17:04.120 READ: bw=119MiB/s (125MB/s), 119MiB/s-119MiB/s (125MB/s-125MB/s), io=597MiB (626MB), run=5002-5002msec 00:17:04.687 ----------------------------------------------------- 00:17:04.688 Suppressions used: 00:17:04.688 count bytes template 00:17:04.688 1 11 /usr/src/fio/parse.c 00:17:04.688 1 8 libtcmalloc_minimal.so 00:17:04.688 1 904 libcrypto.so 00:17:04.688 ----------------------------------------------------- 00:17:04.688 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:04.688 { 00:17:04.688 "subsystems": [ 00:17:04.688 { 00:17:04.688 "subsystem": "bdev", 00:17:04.688 "config": [ 00:17:04.688 { 00:17:04.688 "params": { 00:17:04.688 "io_mechanism": "io_uring_cmd", 00:17:04.688 "conserve_cpu": true, 00:17:04.688 "filename": "/dev/ng0n1", 00:17:04.688 "name": 
"xnvme_bdev" 00:17:04.688 }, 00:17:04.688 "method": "bdev_xnvme_create" 00:17:04.688 }, 00:17:04.688 { 00:17:04.688 "method": "bdev_wait_for_examine" 00:17:04.688 } 00:17:04.688 ] 00:17:04.688 } 00:17:04.688 ] 00:17:04.688 } 00:17:04.688 17:51:31 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:17:04.946 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:17:04.946 fio-3.35 00:17:04.946 Starting 1 thread 00:17:11.506 00:17:11.506 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73444: Wed Nov 20 17:51:37 2024 00:17:11.506 write: IOPS=31.6k, BW=123MiB/s (129MB/s)(617MiB/5002msec); 0 zone resets 00:17:11.506 slat (usec): min=2, max=183, avg= 5.86, stdev= 2.29 00:17:11.506 clat (usec): min=589, max=3982, avg=1796.44, stdev=329.25 00:17:11.506 lat (usec): min=593, max=3989, avg=1802.30, stdev=330.32 00:17:11.506 clat percentiles (usec): 00:17:11.506 | 1.00th=[ 1106], 5.00th=[ 1270], 10.00th=[ 1369], 20.00th=[ 1500], 00:17:11.506 | 30.00th=[ 1614], 40.00th=[ 1696], 50.00th=[ 1778], 60.00th=[ 1876], 00:17:11.506 | 70.00th=[ 1975], 80.00th=[ 2089], 90.00th=[ 2245], 95.00th=[ 2343], 00:17:11.506 | 99.00th=[ 2573], 99.50th=[ 2638], 99.90th=[ 2900], 99.95th=[ 3064], 00:17:11.506 | 99.99th=[ 3589] 00:17:11.506 bw ( KiB/s): min=103936, max=142336, per=100.00%, avg=126634.67, stdev=12012.92, samples=9 00:17:11.506 iops : min=25984, max=35584, avg=31658.67, stdev=3003.23, samples=9 00:17:11.506 lat (usec) : 750=0.01%, 1000=0.08% 00:17:11.506 lat (msec) : 2=72.71%, 4=27.19% 00:17:11.506 cpu : usr=51.77%, sys=45.59%, ctx=8, majf=0, minf=763 00:17:11.506 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:17:11.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.506 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:17:11.506 issued rwts: total=0,158008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.506 latency : target=0, window=0, percentile=100.00%, depth=64 00:17:11.506 00:17:11.506 Run status group 0 (all jobs): 00:17:11.506 WRITE: bw=123MiB/s (129MB/s), 123MiB/s-123MiB/s (129MB/s-129MB/s), io=617MiB (647MB), run=5002-5002msec 00:17:12.105 ----------------------------------------------------- 00:17:12.105 Suppressions used: 00:17:12.105 count bytes template 00:17:12.105 1 11 /usr/src/fio/parse.c 00:17:12.105 1 8 libtcmalloc_minimal.so 00:17:12.105 1 904 libcrypto.so 00:17:12.105 ----------------------------------------------------- 00:17:12.105 00:17:12.105 00:17:12.105 real 0m14.811s 00:17:12.105 user 0m8.968s 00:17:12.105 sys 0m5.317s 00:17:12.105 17:51:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.105 ************************************ 00:17:12.105 END TEST xnvme_fio_plugin 00:17:12.105 ************************************ 00:17:12.105 17:51:39 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:17:12.105 17:51:39 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 72912 00:17:12.105 17:51:39 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 72912 ']' 00:17:12.105 17:51:39 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 72912 00:17:12.105 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (72912) - No such process 00:17:12.105 
17:51:39 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 72912 is not found' 00:17:12.105 Process with pid 72912 is not found 00:17:12.105 00:17:12.105 real 3m51.834s 00:17:12.105 user 2m5.616s 00:17:12.105 sys 1m28.646s 00:17:12.105 ************************************ 00:17:12.105 END TEST nvme_xnvme 00:17:12.105 ************************************ 00:17:12.105 17:51:39 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.105 17:51:39 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:12.105 17:51:39 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:17:12.105 17:51:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:12.105 17:51:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.105 17:51:39 -- common/autotest_common.sh@10 -- # set +x 00:17:12.105 ************************************ 00:17:12.105 START TEST blockdev_xnvme 00:17:12.105 ************************************ 00:17:12.105 17:51:39 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:17:12.373 * Looking for test storage... 00:17:12.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:17:12.373 17:51:39 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:12.373 17:51:39 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:17:12.373 17:51:39 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:12.373 17:51:39 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.373 17:51:39 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:17:12.373 17:51:39 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.373 17:51:39 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:12.373 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.373 --rc genhtml_branch_coverage=1 00:17:12.373 --rc genhtml_function_coverage=1 00:17:12.373 --rc genhtml_legend=1 00:17:12.373 --rc geninfo_all_blocks=1 00:17:12.373 --rc geninfo_unexecuted_blocks=1 00:17:12.373 00:17:12.373 ' 00:17:12.374 17:51:39 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:12.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.374 --rc genhtml_branch_coverage=1 00:17:12.374 --rc genhtml_function_coverage=1 00:17:12.374 --rc genhtml_legend=1 00:17:12.374 --rc geninfo_all_blocks=1 00:17:12.374 --rc geninfo_unexecuted_blocks=1 00:17:12.374 00:17:12.374 ' 00:17:12.374 17:51:39 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:12.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.374 --rc genhtml_branch_coverage=1 00:17:12.374 --rc genhtml_function_coverage=1 00:17:12.374 --rc genhtml_legend=1 00:17:12.374 --rc geninfo_all_blocks=1 00:17:12.374 --rc geninfo_unexecuted_blocks=1 00:17:12.374 00:17:12.374 ' 00:17:12.374 17:51:39 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:12.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.374 --rc genhtml_branch_coverage=1 00:17:12.374 --rc genhtml_function_coverage=1 00:17:12.374 --rc genhtml_legend=1 00:17:12.374 --rc geninfo_all_blocks=1 00:17:12.374 --rc geninfo_unexecuted_blocks=1 00:17:12.374 00:17:12.374 ' 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73584 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73584 00:17:12.374 17:51:39 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:17:12.374 17:51:39 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73584 ']' 00:17:12.374 17:51:39 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.374 17:51:39 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:12.374 17:51:39 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.374 17:51:39 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:12.374 17:51:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:12.633 [2024-11-20 17:51:39.603882] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
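The target launch traced at blockdev.sh@46-49 above follows the stock SPDK test pattern: start spdk_tgt in the background, arm a cleanup trap, then block until its RPC socket answers. Reconstructed from those trace lines (the backgrounding and $! capture are implied by the trace rather than printed):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' &
    spdk_tgt_pid=$!
    trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_tgt_pid"   # polls until /var/tmp/spdk.sock accepts RPCs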
00:17:12.633 [2024-11-20 17:51:39.604132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73584 ] 00:17:12.633 [2024-11-20 17:51:39.786902] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.892 [2024-11-20 17:51:39.903025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.829 17:51:40 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:13.829 17:51:40 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:17:13.829 17:51:40 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:17:13.829 17:51:40 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:17:13.829 17:51:40 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:17:13.829 17:51:40 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:17:13.829 17:51:40 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:17:14.397 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:14.965 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:17:14.965 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:17:14.965 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:17:14.965 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:17:14.965 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:17:14.965 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2c2n1 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:17:15.226 nvme0n1 00:17:15.226 nvme0n2 00:17:15.226 nvme0n3 00:17:15.226 nvme1n1 00:17:15.226 nvme2n1 00:17:15.226 nvme3n1 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.226 17:51:42 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:17:15.226 17:51:42 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.226 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:17:15.227 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "0688e9ce-da2f-48b8-b09b-5ce0150e21d4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0688e9ce-da2f-48b8-b09b-5ce0150e21d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "9fb72e65-dcbc-4bb6-a1b9-8100bc90ee5d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9fb72e65-dcbc-4bb6-a1b9-8100bc90ee5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "6a1207d7-eca2-42bb-9517-d2f0ccac0f4f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6a1207d7-eca2-42bb-9517-d2f0ccac0f4f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3126232b-f2a7-4000-accc-02adc607f859"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3126232b-f2a7-4000-accc-02adc607f859",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "164213aa-bfae-432c-a44b-eed75d337a37"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "164213aa-bfae-432c-a44b-eed75d337a37",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "65e8147d-1cc7-4c56-aa5e-1dedbd6370a4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "65e8147d-1cc7-4c56-aa5e-1dedbd6370a4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:15.227 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:17:15.486 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:17:15.486 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:17:15.486 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:17:15.486 17:51:42 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 73584 00:17:15.486 17:51:42 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73584 ']' 00:17:15.486 17:51:42 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73584 00:17:15.486 17:51:42 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:17:15.486 17:51:42 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:15.486 17:51:42 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73584 00:17:15.486 killing process with pid 73584 00:17:15.486 17:51:42 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:15.486 17:51:42 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:15.486 17:51:42 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73584' 00:17:15.486 17:51:42 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73584 00:17:15.486 
17:51:42 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73584 00:17:18.031 17:51:44 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:18.031 17:51:44 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:18.031 17:51:44 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:18.031 17:51:44 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.031 17:51:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:18.031 ************************************ 00:17:18.031 START TEST bdev_hello_world 00:17:18.031 ************************************ 00:17:18.031 17:51:44 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:17:18.031 [2024-11-20 17:51:44.969886] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:17:18.031 [2024-11-20 17:51:44.970010] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73879 ] 00:17:18.031 [2024-11-20 17:51:45.150427] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.311 [2024-11-20 17:51:45.265985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.570 [2024-11-20 17:51:45.735476] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:17:18.570 [2024-11-20 17:51:45.735678] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:17:18.570 [2024-11-20 17:51:45.735707] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:17:18.570 [2024-11-20 17:51:45.737791] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:17:18.570 [2024-11-20 17:51:45.738111] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:17:18.570 [2024-11-20 17:51:45.738131] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:17:18.570 [2024-11-20 17:51:45.738275] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:17:18.570 00:17:18.570 [2024-11-20 17:51:45.738295] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:17:19.947 00:17:19.947 real 0m1.974s 00:17:19.947 ************************************ 00:17:19.948 END TEST bdev_hello_world 00:17:19.948 ************************************ 00:17:19.948 user 0m1.611s 00:17:19.948 sys 0m0.243s 00:17:19.948 17:51:46 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:19.948 17:51:46 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:17:19.948 17:51:46 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:17:19.948 17:51:46 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:19.948 17:51:46 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:19.948 17:51:46 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:19.948 ************************************ 00:17:19.948 START TEST bdev_bounds 00:17:19.948 ************************************ 00:17:19.948 17:51:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:17:19.948 17:51:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=73925 00:17:19.948 17:51:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:19.948 17:51:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:17:19.948 Process bdevio pid: 73925 00:17:19.948 17:51:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 73925' 00:17:19.948 17:51:46 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 73925 00:17:19.948 17:51:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 73925 ']' 00:17:19.948 17:51:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.948 17:51:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.948 17:51:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.948 17:51:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.948 17:51:46 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:19.948 [2024-11-20 17:51:47.010100] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:17:19.948 [2024-11-20 17:51:47.010404] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73925 ] 00:17:20.207 [2024-11-20 17:51:47.194198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:20.207 [2024-11-20 17:51:47.319047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.207 [2024-11-20 17:51:47.319194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.207 [2024-11-20 17:51:47.319236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.774 17:51:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:20.774 17:51:47 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:17:20.774 17:51:47 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:17:20.774 I/O targets: 00:17:20.774 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:20.774 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:20.774 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:17:20.774 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:17:20.774 nvme2n1: 262144 blocks of 4096 bytes (1024 MiB) 00:17:20.774 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:17:20.774 00:17:20.774 00:17:20.774 CUnit - A unit testing framework for C - Version 2.1-3 00:17:20.774 http://cunit.sourceforge.net/ 00:17:20.774 00:17:20.774 00:17:20.774 Suite: bdevio tests on: nvme3n1 00:17:20.774 Test: blockdev write read block ...passed 00:17:20.774 Test: blockdev write zeroes read block ...passed 00:17:20.774 Test: blockdev write zeroes read no split ...passed 00:17:21.033 Test: blockdev write zeroes read split ...passed 00:17:21.033 Test: blockdev write zeroes read split partial ...passed 00:17:21.033 Test: blockdev reset ...passed 00:17:21.033 Test: blockdev write read 8 blocks ...passed 00:17:21.033 Test: blockdev write read size > 128k ...passed 00:17:21.033 Test: blockdev write read invalid size ...passed 00:17:21.033 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:21.033 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:21.033 Test: blockdev write read max offset ...passed 00:17:21.033 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:21.033 Test: blockdev writev readv 8 blocks ...passed 00:17:21.033 Test: blockdev writev readv 30 x 1block ...passed 00:17:21.033 Test: blockdev writev readv block ...passed 00:17:21.033 Test: blockdev writev readv size > 128k ...passed 00:17:21.033 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:21.033 Test: blockdev comparev and writev ...passed 00:17:21.033 Test: blockdev nvme passthru rw ...passed 00:17:21.033 Test: blockdev nvme passthru vendor specific ...passed 00:17:21.033 Test: blockdev nvme admin passthru ...passed 00:17:21.033 Test: blockdev copy ...passed 00:17:21.033 Suite: bdevio tests on: nvme2n1 00:17:21.033 Test: blockdev write read block ...passed 00:17:21.033 Test: blockdev write zeroes read block ...passed 00:17:21.033 Test: blockdev write zeroes read no split ...passed 00:17:21.033 Test: blockdev write zeroes read split ...passed 00:17:21.033 Test: blockdev write zeroes read split partial ...passed 00:17:21.033 Test: blockdev reset ...passed 
00:17:21.033 Test: blockdev write read 8 blocks ...passed 00:17:21.033 Test: blockdev write read size > 128k ...passed 00:17:21.033 Test: blockdev write read invalid size ...passed 00:17:21.033 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:21.033 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:21.033 Test: blockdev write read max offset ...passed 00:17:21.033 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:21.033 Test: blockdev writev readv 8 blocks ...passed 00:17:21.033 Test: blockdev writev readv 30 x 1block ...passed 00:17:21.033 Test: blockdev writev readv block ...passed 00:17:21.033 Test: blockdev writev readv size > 128k ...passed 00:17:21.033 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:21.033 Test: blockdev comparev and writev ...passed 00:17:21.033 Test: blockdev nvme passthru rw ...passed 00:17:21.033 Test: blockdev nvme passthru vendor specific ...passed 00:17:21.033 Test: blockdev nvme admin passthru ...passed 00:17:21.033 Test: blockdev copy ...passed 00:17:21.033 Suite: bdevio tests on: nvme1n1 00:17:21.033 Test: blockdev write read block ...passed 00:17:21.033 Test: blockdev write zeroes read block ...passed 00:17:21.033 Test: blockdev write zeroes read no split ...passed 00:17:21.033 Test: blockdev write zeroes read split ...passed 00:17:21.033 Test: blockdev write zeroes read split partial ...passed 00:17:21.033 Test: blockdev reset ...passed 00:17:21.033 Test: blockdev write read 8 blocks ...passed 00:17:21.033 Test: blockdev write read size > 128k ...passed 00:17:21.033 Test: blockdev write read invalid size ...passed 00:17:21.033 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:21.033 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:21.033 Test: blockdev write read max offset ...passed 00:17:21.033 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:21.033 Test: blockdev writev readv 8 blocks ...passed 00:17:21.033 Test: blockdev writev readv 30 x 1block ...passed 00:17:21.033 Test: blockdev writev readv block ...passed 00:17:21.033 Test: blockdev writev readv size > 128k ...passed 00:17:21.033 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:21.033 Test: blockdev comparev and writev ...passed 00:17:21.033 Test: blockdev nvme passthru rw ...passed 00:17:21.034 Test: blockdev nvme passthru vendor specific ...passed 00:17:21.034 Test: blockdev nvme admin passthru ...passed 00:17:21.034 Test: blockdev copy ...passed 00:17:21.034 Suite: bdevio tests on: nvme0n3 00:17:21.034 Test: blockdev write read block ...passed 00:17:21.034 Test: blockdev write zeroes read block ...passed 00:17:21.034 Test: blockdev write zeroes read no split ...passed 00:17:21.034 Test: blockdev write zeroes read split ...passed 00:17:21.293 Test: blockdev write zeroes read split partial ...passed 00:17:21.293 Test: blockdev reset ...passed 00:17:21.293 Test: blockdev write read 8 blocks ...passed 00:17:21.293 Test: blockdev write read size > 128k ...passed 00:17:21.293 Test: blockdev write read invalid size ...passed 00:17:21.293 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:21.293 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:21.293 Test: blockdev write read max offset ...passed 00:17:21.293 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:21.293 Test: blockdev writev readv 8 blocks 
...passed 00:17:21.293 Test: blockdev writev readv 30 x 1block ...passed 00:17:21.293 Test: blockdev writev readv block ...passed 00:17:21.293 Test: blockdev writev readv size > 128k ...passed 00:17:21.293 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:21.293 Test: blockdev comparev and writev ...passed 00:17:21.293 Test: blockdev nvme passthru rw ...passed 00:17:21.293 Test: blockdev nvme passthru vendor specific ...passed 00:17:21.293 Test: blockdev nvme admin passthru ...passed 00:17:21.293 Test: blockdev copy ...passed 00:17:21.293 Suite: bdevio tests on: nvme0n2 00:17:21.293 Test: blockdev write read block ...passed 00:17:21.293 Test: blockdev write zeroes read block ...passed 00:17:21.293 Test: blockdev write zeroes read no split ...passed 00:17:21.293 Test: blockdev write zeroes read split ...passed 00:17:21.293 Test: blockdev write zeroes read split partial ...passed 00:17:21.293 Test: blockdev reset ...passed 00:17:21.293 Test: blockdev write read 8 blocks ...passed 00:17:21.293 Test: blockdev write read size > 128k ...passed 00:17:21.293 Test: blockdev write read invalid size ...passed 00:17:21.293 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:21.293 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:21.293 Test: blockdev write read max offset ...passed 00:17:21.293 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:21.293 Test: blockdev writev readv 8 blocks ...passed 00:17:21.293 Test: blockdev writev readv 30 x 1block ...passed 00:17:21.293 Test: blockdev writev readv block ...passed 00:17:21.293 Test: blockdev writev readv size > 128k ...passed 00:17:21.293 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:21.293 Test: blockdev comparev and writev ...passed 00:17:21.293 Test: blockdev nvme passthru rw ...passed 00:17:21.293 Test: blockdev nvme passthru vendor specific ...passed 00:17:21.293 Test: blockdev nvme admin passthru ...passed 00:17:21.293 Test: blockdev copy ...passed 00:17:21.293 Suite: bdevio tests on: nvme0n1 00:17:21.293 Test: blockdev write read block ...passed 00:17:21.293 Test: blockdev write zeroes read block ...passed 00:17:21.293 Test: blockdev write zeroes read no split ...passed 00:17:21.293 Test: blockdev write zeroes read split ...passed 00:17:21.293 Test: blockdev write zeroes read split partial ...passed 00:17:21.293 Test: blockdev reset ...passed 00:17:21.293 Test: blockdev write read 8 blocks ...passed 00:17:21.293 Test: blockdev write read size > 128k ...passed 00:17:21.293 Test: blockdev write read invalid size ...passed 00:17:21.293 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:21.293 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:21.293 Test: blockdev write read max offset ...passed 00:17:21.293 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:21.293 Test: blockdev writev readv 8 blocks ...passed 00:17:21.293 Test: blockdev writev readv 30 x 1block ...passed 00:17:21.293 Test: blockdev writev readv block ...passed 00:17:21.293 Test: blockdev writev readv size > 128k ...passed 00:17:21.293 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:21.293 Test: blockdev comparev and writev ...passed 00:17:21.293 Test: blockdev nvme passthru rw ...passed 00:17:21.293 Test: blockdev nvme passthru vendor specific ...passed 00:17:21.293 Test: blockdev nvme admin passthru ...passed 00:17:21.293 Test: blockdev copy ...passed 
00:17:21.293 00:17:21.293 Run Summary: Type Total Ran Passed Failed Inactive 00:17:21.293 suites 6 6 n/a 0 0 00:17:21.293 tests 138 138 138 0 0 00:17:21.293 asserts 780 780 780 0 n/a 00:17:21.293 00:17:21.293 Elapsed time = 1.292 seconds 00:17:21.293 0 00:17:21.293 17:51:48 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 73925 00:17:21.293 17:51:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 73925 ']' 00:17:21.293 17:51:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 73925 00:17:21.293 17:51:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:17:21.293 17:51:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.293 17:51:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73925 00:17:21.293 killing process with pid 73925 00:17:21.293 17:51:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:21.293 17:51:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:21.293 17:51:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73925' 00:17:21.293 17:51:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 73925 00:17:21.293 17:51:48 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 73925 00:17:22.669 ************************************ 00:17:22.669 END TEST bdev_bounds 00:17:22.669 ************************************ 00:17:22.669 17:51:49 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:17:22.669 00:17:22.669 real 0m2.688s 00:17:22.669 user 0m6.624s 00:17:22.669 sys 0m0.383s 00:17:22.669 17:51:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.669 17:51:49 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:17:22.669 17:51:49 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:17:22.669 17:51:49 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:22.669 17:51:49 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.669 17:51:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:22.669 ************************************ 00:17:22.669 START TEST bdev_nbd 00:17:22.669 ************************************ 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=73979 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 73979 /var/tmp/spdk-nbd.sock 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 73979 ']' 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:22.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:22.669 17:51:49 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:17:22.669 [2024-11-20 17:51:49.776890] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:17:22.669 [2024-11-20 17:51:49.777496] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.928 [2024-11-20 17:51:49.952298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.928 [2024-11-20 17:51:50.066929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:23.496 17:51:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:23.496 17:51:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:17:23.496 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:17:23.496 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:23.496 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:23.496 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:17:23.496 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:17:23.496 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:23.496 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:23.496 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:17:23.496 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:17:23.496 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:17:23.496 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:17:23.496 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:23.496 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:23.786 
1+0 records in 00:17:23.786 1+0 records out 00:17:23.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109911 s, 3.7 MB/s 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:23.786 17:51:50 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:17:24.054 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:17:24.054 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:17:24.054 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:17:24.054 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:24.054 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:24.055 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.055 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.055 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:24.055 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:24.055 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.055 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.055 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.055 1+0 records in 00:17:24.055 1+0 records out 00:17:24.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000965384 s, 4.2 MB/s 00:17:24.055 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.055 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:24.055 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.055 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.055 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:24.055 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:24.055 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:24.055 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:17:24.316 17:51:51 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.316 1+0 records in 00:17:24.316 1+0 records out 00:17:24.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000961948 s, 4.3 MB/s 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:24.316 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.575 1+0 records in 00:17:24.575 1+0 records out 00:17:24.575 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000822555 s, 5.0 MB/s 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:24.575 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:17:24.835 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:17:24.835 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:17:24.835 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:17:24.835 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:17:24.835 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:24.835 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:24.835 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:24.835 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:17:24.835 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:24.835 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:24.835 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:24.835 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:24.835 1+0 records in 00:17:24.835 1+0 records out 00:17:24.835 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000582042 s, 7.0 MB/s 00:17:24.836 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.836 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:24.836 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:24.836 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:24.836 17:51:51 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:24.836 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:24.836 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:24.836 17:51:51 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:17:25.095 17:51:52 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:25.095 1+0 records in 00:17:25.095 1+0 records out 00:17:25.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000558975 s, 7.3 MB/s 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:17:25.095 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:25.355 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:17:25.355 { 00:17:25.355 "nbd_device": "/dev/nbd0", 00:17:25.355 "bdev_name": "nvme0n1" 00:17:25.355 }, 00:17:25.355 { 00:17:25.355 "nbd_device": "/dev/nbd1", 00:17:25.355 "bdev_name": "nvme0n2" 00:17:25.355 }, 00:17:25.355 { 00:17:25.355 "nbd_device": "/dev/nbd2", 00:17:25.355 "bdev_name": "nvme0n3" 00:17:25.355 }, 00:17:25.355 { 00:17:25.355 "nbd_device": "/dev/nbd3", 00:17:25.355 "bdev_name": "nvme1n1" 00:17:25.355 }, 00:17:25.355 { 00:17:25.355 "nbd_device": "/dev/nbd4", 00:17:25.355 "bdev_name": "nvme2n1" 00:17:25.355 }, 00:17:25.355 { 00:17:25.355 "nbd_device": "/dev/nbd5", 00:17:25.355 "bdev_name": "nvme3n1" 00:17:25.355 } 00:17:25.355 ]' 00:17:25.355 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:17:25.355 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:17:25.355 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:17:25.355 { 00:17:25.355 "nbd_device": "/dev/nbd0", 00:17:25.355 "bdev_name": "nvme0n1" 00:17:25.355 }, 00:17:25.355 { 00:17:25.355 "nbd_device": "/dev/nbd1", 00:17:25.355 "bdev_name": "nvme0n2" 00:17:25.355 }, 00:17:25.355 { 00:17:25.355 "nbd_device": "/dev/nbd2", 00:17:25.355 "bdev_name": "nvme0n3" 00:17:25.355 }, 00:17:25.355 { 00:17:25.355 "nbd_device": "/dev/nbd3", 00:17:25.355 "bdev_name": "nvme1n1" 00:17:25.355 }, 00:17:25.355 { 00:17:25.355 "nbd_device": "/dev/nbd4", 00:17:25.355 "bdev_name": "nvme2n1" 00:17:25.355 }, 00:17:25.355 { 00:17:25.355 "nbd_device": 
"/dev/nbd5", 00:17:25.355 "bdev_name": "nvme3n1" 00:17:25.355 } 00:17:25.355 ]' 00:17:25.355 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:17:25.355 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:25.355 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:17:25.355 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:25.355 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:25.355 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.355 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:25.615 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:25.615 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:25.615 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:25.615 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.615 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.615 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:25.615 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:25.615 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.615 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.615 17:51:52 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:25.874 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:25.874 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:25.874 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:25.874 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:25.874 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:25.874 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:25.874 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:25.874 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:25.874 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:25.874 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:17:26.133 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:17:26.133 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:17:26.133 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:17:26.133 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.133 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.133 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:17:26.133 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:26.133 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.133 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.133 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:17:26.393 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:17:26.393 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:17:26.393 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:17:26.393 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.393 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.393 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:17:26.393 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:26.393 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.393 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.393 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:17:26.652 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:17:26.652 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:17:26.652 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:17:26.652 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.652 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.652 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:17:26.652 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:26.652 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.652 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:26.652 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:17:26.911 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:17:26.911 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:17:26.911 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:17:26.911 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:26.911 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:26.911 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:17:26.911 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:26.911 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:26.911 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:26.911 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:26.911 17:51:53 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:27.170 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:27.170 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:27.170 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:27.170 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:27.170 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:27.170 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:27.170 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:27.170 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:27.170 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:27.170 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:17:27.170 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:17:27.170 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:17:27.171 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:27.171 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:27.171 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:27.171 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:27.171 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:27.171 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:27.171 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:27.171 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:27.171 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:17:27.171 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:27.171 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:27.171 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:27.171 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:17:27.171 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:27.171 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:27.171 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:17:27.430 /dev/nbd0 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:27.430 1+0 records in 00:17:27.430 1+0 records out 00:17:27.430 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529592 s, 7.7 MB/s 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:27.430 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:17:27.689 /dev/nbd1 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:27.689 1+0 records in 00:17:27.689 1+0 records out 00:17:27.689 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000654339 s, 6.3 MB/s 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:27.689 17:51:54 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:27.689 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:27.690 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:17:27.948 /dev/nbd10 00:17:27.948 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:17:27.948 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:17:27.948 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:17:27.948 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:27.949 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:27.949 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:27.949 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:17:27.949 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:27.949 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:27.949 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:27.949 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:27.949 1+0 records in 00:17:27.949 1+0 records out 00:17:27.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000641824 s, 6.4 MB/s 00:17:27.949 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.949 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:27.949 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:27.949 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:27.949 17:51:54 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:27.949 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:27.949 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:27.949 17:51:54 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:17:28.208 /dev/nbd11 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:28.208 17:51:55 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:28.208 1+0 records in 00:17:28.208 1+0 records out 00:17:28.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000879724 s, 4.7 MB/s 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:28.208 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:17:28.467 /dev/nbd12 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:28.467 1+0 records in 00:17:28.467 1+0 records out 00:17:28.467 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000799605 s, 5.1 MB/s 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:28.467 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:17:28.725 /dev/nbd13 00:17:28.725 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:17:28.725 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:17:28.725 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:17:28.725 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:17:28.725 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:28.725 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:28.725 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:17:28.725 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:17:28.725 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:28.725 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:28.725 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:28.725 1+0 records in 00:17:28.725 1+0 records out 00:17:28.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000665677 s, 6.2 MB/s 00:17:28.726 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.726 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:17:28.726 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:28.726 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:28.726 17:51:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:17:28.726 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:28.726 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:17:28.726 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:28.726 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:28.726 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:28.985 { 00:17:28.985 "nbd_device": "/dev/nbd0", 00:17:28.985 "bdev_name": "nvme0n1" 00:17:28.985 }, 00:17:28.985 { 00:17:28.985 "nbd_device": "/dev/nbd1", 00:17:28.985 "bdev_name": "nvme0n2" 00:17:28.985 }, 00:17:28.985 { 00:17:28.985 "nbd_device": "/dev/nbd10", 00:17:28.985 "bdev_name": "nvme0n3" 00:17:28.985 }, 00:17:28.985 { 00:17:28.985 "nbd_device": "/dev/nbd11", 00:17:28.985 "bdev_name": "nvme1n1" 00:17:28.985 }, 00:17:28.985 { 00:17:28.985 "nbd_device": "/dev/nbd12", 00:17:28.985 "bdev_name": "nvme2n1" 00:17:28.985 }, 00:17:28.985 { 00:17:28.985 "nbd_device": "/dev/nbd13", 00:17:28.985 "bdev_name": "nvme3n1" 00:17:28.985 } 00:17:28.985 ]' 00:17:28.985 17:51:55 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:28.985 { 00:17:28.985 "nbd_device": "/dev/nbd0", 00:17:28.985 "bdev_name": "nvme0n1" 00:17:28.985 }, 00:17:28.985 { 00:17:28.985 "nbd_device": "/dev/nbd1", 00:17:28.985 "bdev_name": "nvme0n2" 00:17:28.985 }, 00:17:28.985 { 00:17:28.985 "nbd_device": "/dev/nbd10", 00:17:28.985 "bdev_name": "nvme0n3" 00:17:28.985 }, 00:17:28.985 { 00:17:28.985 "nbd_device": "/dev/nbd11", 00:17:28.985 "bdev_name": "nvme1n1" 00:17:28.985 }, 00:17:28.985 { 00:17:28.985 "nbd_device": "/dev/nbd12", 00:17:28.985 "bdev_name": "nvme2n1" 00:17:28.985 }, 00:17:28.985 { 00:17:28.985 "nbd_device": "/dev/nbd13", 00:17:28.985 "bdev_name": "nvme3n1" 00:17:28.985 } 00:17:28.985 ]' 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:28.985 /dev/nbd1 00:17:28.985 /dev/nbd10 00:17:28.985 /dev/nbd11 00:17:28.985 /dev/nbd12 00:17:28.985 /dev/nbd13' 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:28.985 /dev/nbd1 00:17:28.985 /dev/nbd10 00:17:28.985 /dev/nbd11 00:17:28.985 /dev/nbd12 00:17:28.985 /dev/nbd13' 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:17:28.985 256+0 records in 00:17:28.985 256+0 records out 00:17:28.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119971 s, 87.4 MB/s 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:28.985 17:51:55 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:28.985 256+0 records in 00:17:28.985 256+0 records out 00:17:28.985 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117554 s, 8.9 MB/s 00:17:28.985 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:28.985 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:29.244 256+0 records in 00:17:29.244 256+0 records out 00:17:29.244 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.12089 s, 8.7 MB/s 00:17:29.244 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:29.244 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:17:29.244 256+0 records in 00:17:29.244 256+0 records out 00:17:29.244 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116608 s, 9.0 MB/s 00:17:29.244 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:29.244 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:17:29.502 256+0 records in 00:17:29.502 256+0 records out 00:17:29.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.145878 s, 7.2 MB/s 00:17:29.502 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:29.502 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:17:29.502 256+0 records in 00:17:29.502 256+0 records out 00:17:29.502 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122745 s, 8.5 MB/s 00:17:29.502 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:29.503 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:17:29.764 256+0 records in 00:17:29.764 256+0 records out 00:17:29.764 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129344 s, 8.1 MB/s 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:29.764 17:51:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:30.035 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:30.035 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:30.035 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:30.035 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.035 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.035 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:30.035 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:30.035 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.035 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.035 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:30.293 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:30.293 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:30.293 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:30.293 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.293 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.293 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:30.293 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:30.293 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.293 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.293 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.551 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:17:30.809 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:17:30.809 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:17:30.809 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:17:30.809 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.809 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.809 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:17:30.809 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:30.810 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.810 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:30.810 17:51:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:17:31.068 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:17:31.068 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:17:31.068 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:17:31.068 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:31.068 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:17:31.068 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:17:31.068 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:31.068 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:31.068 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:31.068 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.068 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:31.326 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:31.326 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:31.326 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:31.326 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:31.326 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:31.326 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:17:31.326 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:17:31.326 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:17:31.326 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:17:31.326 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:17:31.326 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:31.326 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:17:31.326 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:31.326 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:31.326 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:17:31.326 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:17:31.585 malloc_lvol_verify 00:17:31.585 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:17:31.844 5d25c140-5ac5-47e1-a753-00eabe2e0317 00:17:31.844 17:51:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:17:32.104 635b7f20-83cb-4647-9ab2-0b1489ed08f9 00:17:32.104 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:17:32.104 /dev/nbd0 00:17:32.104 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:17:32.104 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:17:32.104 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:17:32.104 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:17:32.104 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:17:32.104 mke2fs 1.47.0 (5-Feb-2023) 00:17:32.104 
Discarding device blocks: 0/4096 done 00:17:32.104 Creating filesystem with 4096 1k blocks and 1024 inodes 00:17:32.104 00:17:32.104 Allocating group tables: 0/1 done 00:17:32.105 Writing inode tables: 0/1 done 00:17:32.105 Creating journal (1024 blocks): done 00:17:32.105 Writing superblocks and filesystem accounting information: 0/1 done 00:17:32.105 00:17:32.105 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:17:32.105 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:32.105 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:32.105 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:32.105 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:17:32.105 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:32.105 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 73979 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 73979 ']' 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 73979 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73979 00:17:32.366 killing process with pid 73979 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73979' 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 73979 00:17:32.366 17:51:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 73979 00:17:33.748 17:52:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:17:33.748 00:17:33.748 real 0m11.189s 00:17:33.748 user 0m14.422s 00:17:33.748 sys 0m4.690s 00:17:33.748 ************************************ 00:17:33.748 END TEST bdev_nbd 00:17:33.748 ************************************ 00:17:33.748 17:52:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:33.748 17:52:00 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:17:34.008 17:52:00 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:17:34.008 17:52:00 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:17:34.008 17:52:00 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:17:34.009 17:52:00 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:17:34.009 17:52:00 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:34.009 17:52:00 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.009 17:52:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:34.009 ************************************ 00:17:34.009 START TEST bdev_fio 00:17:34.009 ************************************ 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:17:34.009 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:17:34.009 17:52:00 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:17:34.009 
17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:34.009 ************************************ 00:17:34.009 START TEST bdev_fio_rw_verify 00:17:34.009 ************************************ 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:34.009 17:52:01 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:17:34.269 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:34.269 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:34.269 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:34.269 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:34.269 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:34.269 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:17:34.269 fio-3.35 00:17:34.269 Starting 6 threads 00:17:46.473 00:17:46.473 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74403: Wed Nov 20 17:52:12 2024 00:17:46.473 read: IOPS=33.3k, BW=130MiB/s (136MB/s)(1300MiB/10002msec) 00:17:46.473 slat (usec): min=2, max=987, avg= 5.99, stdev= 4.50 00:17:46.473 clat (usec): min=77, max=4555, avg=565.40, 
stdev=215.43 00:17:46.473 lat (usec): min=79, max=4565, avg=571.39, stdev=216.16 00:17:46.473 clat percentiles (usec): 00:17:46.473 | 50.000th=[ 578], 99.000th=[ 1156], 99.900th=[ 2147], 99.990th=[ 3851], 00:17:46.473 | 99.999th=[ 4490] 00:17:46.473 write: IOPS=33.5k, BW=131MiB/s (137MB/s)(1309MiB/10002msec); 0 zone resets 00:17:46.473 slat (usec): min=7, max=3196, avg=22.37, stdev=29.11 00:17:46.473 clat (usec): min=88, max=5566, avg=648.21, stdev=239.95 00:17:46.473 lat (usec): min=104, max=5607, avg=670.58, stdev=243.84 00:17:46.473 clat percentiles (usec): 00:17:46.473 | 50.000th=[ 644], 99.000th=[ 1401], 99.900th=[ 2147], 99.990th=[ 5407], 00:17:46.473 | 99.999th=[ 5538] 00:17:46.473 bw ( KiB/s): min=102977, max=160679, per=100.00%, avg=134753.47, stdev=2486.70, samples=114 00:17:46.473 iops : min=25744, max=40169, avg=33687.68, stdev=621.65, samples=114 00:17:46.473 lat (usec) : 100=0.01%, 250=4.66%, 500=24.81%, 750=51.55%, 1000=14.83% 00:17:46.473 lat (msec) : 2=4.00%, 4=0.13%, 10=0.01% 00:17:46.473 cpu : usr=59.77%, sys=26.77%, ctx=7794, majf=0, minf=27561 00:17:46.473 IO depths : 1=12.0%, 2=24.4%, 4=50.6%, 8=13.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:46.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.474 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.474 issued rwts: total=332731,335085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:46.474 latency : target=0, window=0, percentile=100.00%, depth=8 00:17:46.474 00:17:46.474 Run status group 0 (all jobs): 00:17:46.474 READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=1300MiB (1363MB), run=10002-10002msec 00:17:46.474 WRITE: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=1309MiB (1373MB), run=10002-10002msec 00:17:46.733 ----------------------------------------------------- 00:17:46.733 Suppressions used: 00:17:46.733 count bytes template 00:17:46.733 6 48 /usr/src/fio/parse.c 00:17:46.733 2096 201216 /usr/src/fio/iolog.c 00:17:46.733 1 8 libtcmalloc_minimal.so 00:17:46.733 1 904 libcrypto.so 00:17:46.733 ----------------------------------------------------- 00:17:46.733 00:17:46.733 00:17:46.733 real 0m12.674s 00:17:46.733 user 0m37.923s 00:17:46.733 sys 0m16.587s 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:17:46.733 ************************************ 00:17:46.733 END TEST bdev_fio_rw_verify 00:17:46.733 ************************************ 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:17:46.733 17:52:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:17:46.734 17:52:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "0688e9ce-da2f-48b8-b09b-5ce0150e21d4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0688e9ce-da2f-48b8-b09b-5ce0150e21d4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "9fb72e65-dcbc-4bb6-a1b9-8100bc90ee5d"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9fb72e65-dcbc-4bb6-a1b9-8100bc90ee5d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "6a1207d7-eca2-42bb-9517-d2f0ccac0f4f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6a1207d7-eca2-42bb-9517-d2f0ccac0f4f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "3126232b-f2a7-4000-accc-02adc607f859"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "3126232b-f2a7-4000-accc-02adc607f859",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "164213aa-bfae-432c-a44b-eed75d337a37"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "164213aa-bfae-432c-a44b-eed75d337a37",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "65e8147d-1cc7-4c56-aa5e-1dedbd6370a4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "65e8147d-1cc7-4c56-aa5e-1dedbd6370a4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:17:46.734 17:52:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:17:46.734 17:52:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:17:46.734 /home/vagrant/spdk_repo/spdk 00:17:46.734 17:52:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:17:46.734 17:52:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:17:46.734 17:52:13 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:17:46.734 00:17:46.734 real 0m12.877s 00:17:46.734 user 0m38.021s 00:17:46.734 sys 0m16.695s 00:17:46.734 17:52:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:46.734 17:52:13 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:17:46.734 ************************************ 00:17:46.734 END TEST bdev_fio 00:17:46.734 ************************************ 00:17:46.734 17:52:13 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:17:46.734 17:52:13 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:46.734 17:52:13 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:46.734 17:52:13 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:46.734 17:52:13 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:46.734 ************************************ 00:17:46.734 START TEST bdev_verify 00:17:46.734 ************************************ 00:17:46.734 17:52:13 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:17:46.993 [2024-11-20 17:52:13.982559] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:17:46.993 [2024-11-20 17:52:13.982705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74573 ] 00:17:47.253 [2024-11-20 17:52:14.177396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:47.253 [2024-11-20 17:52:14.299350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.253 [2024-11-20 17:52:14.299386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.821 Running I/O for 5 seconds... 
00:17:50.134 25056.00 IOPS, 97.88 MiB/s [2024-11-20T17:52:18.244Z] 24912.00 IOPS, 97.31 MiB/s [2024-11-20T17:52:19.180Z] 24725.33 IOPS, 96.58 MiB/s [2024-11-20T17:52:20.115Z] 24672.00 IOPS, 96.38 MiB/s 00:17:52.939 Latency(us) 00:17:52.939 [2024-11-20T17:52:20.115Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:52.939 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:52.939 Verification LBA range: start 0x0 length 0x80000 00:17:52.939 nvme0n1 : 5.05 1873.90 7.32 0.00 0.00 68193.80 11791.22 77485.13 00:17:52.939 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:52.939 Verification LBA range: start 0x80000 length 0x80000 00:17:52.939 nvme0n1 : 5.01 1838.00 7.18 0.00 0.00 69534.49 13475.68 65272.80 00:17:52.939 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:52.939 Verification LBA range: start 0x0 length 0x80000 00:17:52.939 nvme0n2 : 5.06 1870.20 7.31 0.00 0.00 68229.81 15686.53 66115.03 00:17:52.940 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:52.940 Verification LBA range: start 0x80000 length 0x80000 00:17:52.940 nvme0n2 : 5.05 1849.58 7.22 0.00 0.00 68992.16 6895.76 69905.07 00:17:52.940 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:52.940 Verification LBA range: start 0x0 length 0x80000 00:17:52.940 nvme0n3 : 5.07 1869.38 7.30 0.00 0.00 68171.81 11685.94 62325.00 00:17:52.940 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:52.940 Verification LBA range: start 0x80000 length 0x80000 00:17:52.940 nvme0n3 : 5.03 1831.62 7.15 0.00 0.00 69554.90 5869.29 68220.61 00:17:52.940 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:52.940 Verification LBA range: start 0x0 length 0xbd0bd 00:17:52.940 nvme1n1 : 5.07 2686.70 10.49 0.00 0.00 47319.34 6316.72 51165.46 00:17:52.940 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:52.940 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:17:52.940 nvme1n1 : 5.05 2725.73 10.65 0.00 0.00 46613.70 5606.09 60640.54 00:17:52.940 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:52.940 Verification LBA range: start 0x0 length 0x20000 00:17:52.940 nvme2n1 : 5.08 1890.86 7.39 0.00 0.00 67221.16 6948.40 71589.53 00:17:52.940 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:52.940 Verification LBA range: start 0x20000 length 0x20000 00:17:52.940 nvme2n1 : 5.05 1848.75 7.22 0.00 0.00 68588.38 7632.71 64851.69 00:17:52.940 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:52.940 Verification LBA range: start 0x0 length 0xa0000 00:17:52.940 nvme3n1 : 5.07 1867.87 7.30 0.00 0.00 67954.77 5184.98 76221.79 00:17:52.940 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:17:52.940 Verification LBA range: start 0xa0000 length 0xa0000 00:17:52.940 nvme3n1 : 5.05 1850.22 7.23 0.00 0.00 68482.17 6843.12 62325.00 00:17:52.940 [2024-11-20T17:52:20.117Z] =================================================================================================================== 00:17:52.941 [2024-11-20T17:52:20.117Z] Total : 24002.82 93.76 0.00 0.00 63626.37 5184.98 77485.13 00:17:54.316 00:17:54.316 real 0m7.166s 00:17:54.316 user 0m10.832s 00:17:54.316 sys 0m2.145s 00:17:54.316 17:52:21 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:54.316 17:52:21 
blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:17:54.316 ************************************ 00:17:54.316 END TEST bdev_verify 00:17:54.316 ************************************ 00:17:54.316 17:52:21 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:54.316 17:52:21 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:17:54.316 17:52:21 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:54.316 17:52:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:17:54.316 ************************************ 00:17:54.316 START TEST bdev_verify_big_io 00:17:54.316 ************************************ 00:17:54.316 17:52:21 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:17:54.316 [2024-11-20 17:52:21.218890] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:17:54.316 [2024-11-20 17:52:21.219027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74682 ] 00:17:54.316 [2024-11-20 17:52:21.402737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:54.573 [2024-11-20 17:52:21.517624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.573 [2024-11-20 17:52:21.517657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.137 Running I/O for 5 seconds... 
00:18:00.321 3224.00 IOPS, 201.50 MiB/s [2024-11-20T17:52:28.066Z] 3626.00 IOPS, 226.62 MiB/s [2024-11-20T17:52:28.066Z] 3908.00 IOPS, 244.25 MiB/s 00:18:00.890 Latency(us) 00:18:00.890 [2024-11-20T17:52:28.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.890 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:00.890 Verification LBA range: start 0x0 length 0x8000 00:18:00.890 nvme0n1 : 5.58 177.93 11.12 0.00 0.00 687399.54 55166.05 751268.91 00:18:00.890 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:00.890 Verification LBA range: start 0x8000 length 0x8000 00:18:00.890 nvme0n1 : 5.67 166.63 10.41 0.00 0.00 753856.38 5237.62 764744.58 00:18:00.890 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:00.890 Verification LBA range: start 0x0 length 0x8000 00:18:00.890 nvme0n2 : 5.52 179.58 11.22 0.00 0.00 673864.03 8685.49 1078054.04 00:18:00.890 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:00.890 Verification LBA range: start 0x8000 length 0x8000 00:18:00.890 nvme0n2 : 5.67 144.00 9.00 0.00 0.00 847119.24 50323.23 1030889.18 00:18:00.890 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:00.890 Verification LBA range: start 0x0 length 0x8000 00:18:00.890 nvme0n3 : 5.58 151.30 9.46 0.00 0.00 781002.16 56008.28 1556440.52 00:18:00.890 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:00.890 Verification LBA range: start 0x8000 length 0x8000 00:18:00.891 nvme0n3 : 5.66 138.55 8.66 0.00 0.00 862297.94 64430.57 1361043.23 00:18:00.891 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:00.891 Verification LBA range: start 0x0 length 0xbd0b 00:18:00.891 nvme1n1 : 5.63 210.28 13.14 0.00 0.00 556528.63 7948.54 909608.10 00:18:00.891 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:00.891 Verification LBA range: start 0xbd0b length 0xbd0b 00:18:00.891 nvme1n1 : 5.67 211.70 13.23 0.00 0.00 551201.71 48217.65 700735.13 00:18:00.891 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:00.891 Verification LBA range: start 0x0 length 0x2000 00:18:00.891 nvme2n1 : 5.63 167.79 10.49 0.00 0.00 682139.16 35163.09 1617081.06 00:18:00.891 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:00.891 Verification LBA range: start 0x2000 length 0x2000 00:18:00.891 nvme2n1 : 5.67 163.56 10.22 0.00 0.00 692280.21 12212.33 1172383.77 00:18:00.891 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:00.891 Verification LBA range: start 0x0 length 0xa000 00:18:00.891 nvme3n1 : 5.68 202.93 12.68 0.00 0.00 552901.31 1664.72 599667.56 00:18:00.891 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:00.891 Verification LBA range: start 0xa000 length 0xa000 00:18:00.891 nvme3n1 : 5.68 166.22 10.39 0.00 0.00 670939.49 6948.40 747899.99 00:18:00.891 [2024-11-20T17:52:28.067Z] =================================================================================================================== 00:18:00.891 [2024-11-20T17:52:28.067Z] Total : 2080.48 130.03 0.00 0.00 679420.60 1664.72 1617081.06 00:18:02.266 00:18:02.266 real 0m8.271s 00:18:02.266 user 0m14.971s 00:18:02.266 sys 0m0.580s 00:18:02.266 17:52:29 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:02.266 17:52:29 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:02.266 ************************************ 00:18:02.266 END TEST bdev_verify_big_io 00:18:02.266 ************************************ 00:18:02.525 17:52:29 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:02.525 17:52:29 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:02.525 17:52:29 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:02.525 17:52:29 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:02.525 ************************************ 00:18:02.525 START TEST bdev_write_zeroes 00:18:02.525 ************************************ 00:18:02.525 17:52:29 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:02.525 [2024-11-20 17:52:29.564341] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:18:02.525 [2024-11-20 17:52:29.564471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74796 ] 00:18:02.783 [2024-11-20 17:52:29.743950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.783 [2024-11-20 17:52:29.892493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.350 Running I/O for 1 seconds... 
00:18:04.725 44074.00 IOPS, 172.16 MiB/s 00:18:04.725 Latency(us) 00:18:04.725 [2024-11-20T17:52:31.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.726 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:04.726 nvme0n1 : 1.03 6615.92 25.84 0.00 0.00 19328.38 8053.82 34320.86 00:18:04.726 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:04.726 nvme0n2 : 1.03 6606.73 25.81 0.00 0.00 19341.51 8053.82 34320.86 00:18:04.726 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:04.726 nvme0n3 : 1.03 6598.12 25.77 0.00 0.00 19352.33 8053.82 34110.30 00:18:04.726 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:04.726 nvme1n1 : 1.03 10678.57 41.71 0.00 0.00 11945.76 3737.39 24424.66 00:18:04.726 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:04.726 nvme2n1 : 1.04 6548.55 25.58 0.00 0.00 19404.27 4237.47 34952.53 00:18:04.726 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:04.726 nvme3n1 : 1.04 6541.35 25.55 0.00 0.00 19402.69 3737.39 34531.42 00:18:04.726 [2024-11-20T17:52:31.902Z] =================================================================================================================== 00:18:04.726 [2024-11-20T17:52:31.902Z] Total : 43589.23 170.27 0.00 0.00 17542.58 3737.39 34952.53 00:18:05.662 00:18:05.662 real 0m3.280s 00:18:05.662 user 0m2.427s 00:18:05.662 sys 0m0.666s 00:18:05.662 17:52:32 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:05.662 17:52:32 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:05.662 ************************************ 00:18:05.662 END TEST bdev_write_zeroes 00:18:05.662 ************************************ 00:18:05.662 17:52:32 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:05.662 17:52:32 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:05.662 17:52:32 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:05.662 17:52:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:05.662 ************************************ 00:18:05.662 START TEST bdev_json_nonenclosed 00:18:05.662 ************************************ 00:18:05.662 17:52:32 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:05.920 [2024-11-20 17:52:32.920231] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:18:05.920 [2024-11-20 17:52:32.920347] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74855 ] 00:18:06.179 [2024-11-20 17:52:33.102434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.179 [2024-11-20 17:52:33.213918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.179 [2024-11-20 17:52:33.214015] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:06.179 [2024-11-20 17:52:33.214038] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:06.179 [2024-11-20 17:52:33.214050] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:06.439 00:18:06.439 real 0m0.642s 00:18:06.439 user 0m0.389s 00:18:06.439 sys 0m0.149s 00:18:06.439 17:52:33 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:06.439 17:52:33 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:06.439 ************************************ 00:18:06.439 END TEST bdev_json_nonenclosed 00:18:06.439 ************************************ 00:18:06.439 17:52:33 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:06.439 17:52:33 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:18:06.439 17:52:33 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:06.439 17:52:33 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:06.439 ************************************ 00:18:06.439 START TEST bdev_json_nonarray 00:18:06.439 ************************************ 00:18:06.439 17:52:33 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:06.698 [2024-11-20 17:52:33.632980] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:18:06.698 [2024-11-20 17:52:33.633103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74886 ] 00:18:06.698 [2024-11-20 17:52:33.813324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.957 [2024-11-20 17:52:33.919366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.957 [2024-11-20 17:52:33.919485] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:18:06.957 [2024-11-20 17:52:33.919508] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:06.957 [2024-11-20 17:52:33.919520] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:07.216 00:18:07.216 real 0m0.629s 00:18:07.216 user 0m0.392s 00:18:07.216 sys 0m0.132s 00:18:07.216 17:52:34 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.216 17:52:34 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:07.216 ************************************ 00:18:07.216 END TEST bdev_json_nonarray 00:18:07.216 ************************************ 00:18:07.216 17:52:34 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:18:07.216 17:52:34 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:18:07.216 17:52:34 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:18:07.216 17:52:34 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:18:07.216 17:52:34 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:18:07.216 17:52:34 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:07.216 17:52:34 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:07.216 17:52:34 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:18:07.216 17:52:34 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:18:07.216 17:52:34 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:18:07.216 17:52:34 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:18:07.216 17:52:34 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:07.857 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:18:13.139 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:18:13.139 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:18:13.139 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:18:13.140 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:18:13.140 00:18:13.140 real 1m0.578s 00:18:13.140 user 1m36.642s 00:18:13.140 sys 0m32.861s 00:18:13.140 17:52:39 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:13.140 ************************************ 00:18:13.140 END TEST blockdev_xnvme 00:18:13.140 ************************************ 00:18:13.140 17:52:39 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:18:13.140 17:52:39 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:13.140 17:52:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:13.140 17:52:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.140 17:52:39 -- common/autotest_common.sh@10 -- # set +x 00:18:13.140 ************************************ 00:18:13.140 START TEST ublk 00:18:13.140 ************************************ 00:18:13.140 17:52:39 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:18:13.140 * Looking for test storage... 
00:18:13.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:13.140 17:52:39 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:13.140 17:52:40 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:13.140 17:52:39 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:18:13.140 17:52:40 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:13.140 17:52:40 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:13.140 17:52:40 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:13.140 17:52:40 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:13.140 17:52:40 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:18:13.140 17:52:40 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:18:13.140 17:52:40 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:18:13.140 17:52:40 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:18:13.140 17:52:40 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:18:13.140 17:52:40 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:18:13.140 17:52:40 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:18:13.140 17:52:40 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:13.140 17:52:40 ublk -- scripts/common.sh@344 -- # case "$op" in 00:18:13.140 17:52:40 ublk -- scripts/common.sh@345 -- # : 1 00:18:13.140 17:52:40 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:13.140 17:52:40 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:13.140 17:52:40 ublk -- scripts/common.sh@365 -- # decimal 1 00:18:13.140 17:52:40 ublk -- scripts/common.sh@353 -- # local d=1 00:18:13.140 17:52:40 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:13.140 17:52:40 ublk -- scripts/common.sh@355 -- # echo 1 00:18:13.140 17:52:40 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:18:13.140 17:52:40 ublk -- scripts/common.sh@366 -- # decimal 2 00:18:13.140 17:52:40 ublk -- scripts/common.sh@353 -- # local d=2 00:18:13.140 17:52:40 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:13.140 17:52:40 ublk -- scripts/common.sh@355 -- # echo 2 00:18:13.140 17:52:40 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:18:13.140 17:52:40 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:13.140 17:52:40 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:13.140 17:52:40 ublk -- scripts/common.sh@368 -- # return 0 00:18:13.140 17:52:40 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:13.140 17:52:40 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:13.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.140 --rc genhtml_branch_coverage=1 00:18:13.140 --rc genhtml_function_coverage=1 00:18:13.140 --rc genhtml_legend=1 00:18:13.140 --rc geninfo_all_blocks=1 00:18:13.140 --rc geninfo_unexecuted_blocks=1 00:18:13.140 00:18:13.140 ' 00:18:13.140 17:52:40 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:13.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.140 --rc genhtml_branch_coverage=1 00:18:13.140 --rc genhtml_function_coverage=1 00:18:13.140 --rc genhtml_legend=1 00:18:13.140 --rc geninfo_all_blocks=1 00:18:13.140 --rc geninfo_unexecuted_blocks=1 00:18:13.140 00:18:13.140 ' 00:18:13.140 17:52:40 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:13.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.140 --rc genhtml_branch_coverage=1 00:18:13.140 --rc 
genhtml_function_coverage=1 00:18:13.140 --rc genhtml_legend=1 00:18:13.140 --rc geninfo_all_blocks=1 00:18:13.140 --rc geninfo_unexecuted_blocks=1 00:18:13.140 00:18:13.140 ' 00:18:13.140 17:52:40 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:13.140 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:13.140 --rc genhtml_branch_coverage=1 00:18:13.140 --rc genhtml_function_coverage=1 00:18:13.140 --rc genhtml_legend=1 00:18:13.140 --rc geninfo_all_blocks=1 00:18:13.140 --rc geninfo_unexecuted_blocks=1 00:18:13.140 00:18:13.140 ' 00:18:13.140 17:52:40 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:13.140 17:52:40 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:13.140 17:52:40 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:13.140 17:52:40 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:13.140 17:52:40 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:13.140 17:52:40 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:13.140 17:52:40 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:13.140 17:52:40 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:13.140 17:52:40 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:18:13.140 17:52:40 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:18:13.140 17:52:40 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:18:13.140 17:52:40 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:18:13.140 17:52:40 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:18:13.140 17:52:40 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:18:13.140 17:52:40 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:18:13.140 17:52:40 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:18:13.140 17:52:40 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:18:13.140 17:52:40 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:18:13.140 17:52:40 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:18:13.140 17:52:40 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:18:13.140 17:52:40 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:13.140 17:52:40 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:13.140 17:52:40 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:13.140 ************************************ 00:18:13.140 START TEST test_save_ublk_config 00:18:13.140 ************************************ 00:18:13.140 17:52:40 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:18:13.140 17:52:40 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:18:13.140 17:52:40 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:18:13.140 17:52:40 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75178 00:18:13.140 17:52:40 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:18:13.140 17:52:40 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75178 00:18:13.140 17:52:40 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75178 ']' 00:18:13.140 17:52:40 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.140 17:52:40 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:13.140 17:52:40 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.140 17:52:40 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.140 17:52:40 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:13.140 [2024-11-20 17:52:40.224697] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:18:13.140 [2024-11-20 17:52:40.224829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75178 ] 00:18:13.399 [2024-11-20 17:52:40.387290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.399 [2024-11-20 17:52:40.499869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.334 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:14.334 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:18:14.334 17:52:41 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:18:14.334 17:52:41 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:18:14.334 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.334 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:14.334 [2024-11-20 17:52:41.378791] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:14.334 [2024-11-20 17:52:41.380008] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:14.334 malloc0 00:18:14.334 [2024-11-20 17:52:41.466920] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:14.334 [2024-11-20 17:52:41.467029] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:14.334 [2024-11-20 17:52:41.467043] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:14.334 [2024-11-20 17:52:41.467052] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:14.334 [2024-11-20 17:52:41.475870] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:14.334 [2024-11-20 17:52:41.475896] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:14.334 [2024-11-20 17:52:41.482805] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:14.334 [2024-11-20 17:52:41.482915] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:14.334 [2024-11-20 17:52:41.499798] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:14.334 0 00:18:14.334 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.592 17:52:41 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:18:14.592 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.592 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:14.851 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.851 17:52:41 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:18:14.851 
"subsystems": [ 00:18:14.851 { 00:18:14.851 "subsystem": "fsdev", 00:18:14.851 "config": [ 00:18:14.851 { 00:18:14.851 "method": "fsdev_set_opts", 00:18:14.851 "params": { 00:18:14.851 "fsdev_io_pool_size": 65535, 00:18:14.851 "fsdev_io_cache_size": 256 00:18:14.851 } 00:18:14.851 } 00:18:14.851 ] 00:18:14.851 }, 00:18:14.851 { 00:18:14.851 "subsystem": "keyring", 00:18:14.851 "config": [] 00:18:14.851 }, 00:18:14.851 { 00:18:14.851 "subsystem": "iobuf", 00:18:14.851 "config": [ 00:18:14.851 { 00:18:14.851 "method": "iobuf_set_options", 00:18:14.851 "params": { 00:18:14.851 "small_pool_count": 8192, 00:18:14.851 "large_pool_count": 1024, 00:18:14.851 "small_bufsize": 8192, 00:18:14.852 "large_bufsize": 135168, 00:18:14.852 "enable_numa": false 00:18:14.852 } 00:18:14.852 } 00:18:14.852 ] 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "subsystem": "sock", 00:18:14.852 "config": [ 00:18:14.852 { 00:18:14.852 "method": "sock_set_default_impl", 00:18:14.852 "params": { 00:18:14.852 "impl_name": "posix" 00:18:14.852 } 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "method": "sock_impl_set_options", 00:18:14.852 "params": { 00:18:14.852 "impl_name": "ssl", 00:18:14.852 "recv_buf_size": 4096, 00:18:14.852 "send_buf_size": 4096, 00:18:14.852 "enable_recv_pipe": true, 00:18:14.852 "enable_quickack": false, 00:18:14.852 "enable_placement_id": 0, 00:18:14.852 "enable_zerocopy_send_server": true, 00:18:14.852 "enable_zerocopy_send_client": false, 00:18:14.852 "zerocopy_threshold": 0, 00:18:14.852 "tls_version": 0, 00:18:14.852 "enable_ktls": false 00:18:14.852 } 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "method": "sock_impl_set_options", 00:18:14.852 "params": { 00:18:14.852 "impl_name": "posix", 00:18:14.852 "recv_buf_size": 2097152, 00:18:14.852 "send_buf_size": 2097152, 00:18:14.852 "enable_recv_pipe": true, 00:18:14.852 "enable_quickack": false, 00:18:14.852 "enable_placement_id": 0, 00:18:14.852 "enable_zerocopy_send_server": true, 00:18:14.852 "enable_zerocopy_send_client": false, 00:18:14.852 "zerocopy_threshold": 0, 00:18:14.852 "tls_version": 0, 00:18:14.852 "enable_ktls": false 00:18:14.852 } 00:18:14.852 } 00:18:14.852 ] 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "subsystem": "vmd", 00:18:14.852 "config": [] 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "subsystem": "accel", 00:18:14.852 "config": [ 00:18:14.852 { 00:18:14.852 "method": "accel_set_options", 00:18:14.852 "params": { 00:18:14.852 "small_cache_size": 128, 00:18:14.852 "large_cache_size": 16, 00:18:14.852 "task_count": 2048, 00:18:14.852 "sequence_count": 2048, 00:18:14.852 "buf_count": 2048 00:18:14.852 } 00:18:14.852 } 00:18:14.852 ] 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "subsystem": "bdev", 00:18:14.852 "config": [ 00:18:14.852 { 00:18:14.852 "method": "bdev_set_options", 00:18:14.852 "params": { 00:18:14.852 "bdev_io_pool_size": 65535, 00:18:14.852 "bdev_io_cache_size": 256, 00:18:14.852 "bdev_auto_examine": true, 00:18:14.852 "iobuf_small_cache_size": 128, 00:18:14.852 "iobuf_large_cache_size": 16 00:18:14.852 } 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "method": "bdev_raid_set_options", 00:18:14.852 "params": { 00:18:14.852 "process_window_size_kb": 1024, 00:18:14.852 "process_max_bandwidth_mb_sec": 0 00:18:14.852 } 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "method": "bdev_iscsi_set_options", 00:18:14.852 "params": { 00:18:14.852 "timeout_sec": 30 00:18:14.852 } 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "method": "bdev_nvme_set_options", 00:18:14.852 "params": { 00:18:14.852 "action_on_timeout": "none", 
00:18:14.852 "timeout_us": 0, 00:18:14.852 "timeout_admin_us": 0, 00:18:14.852 "keep_alive_timeout_ms": 10000, 00:18:14.852 "arbitration_burst": 0, 00:18:14.852 "low_priority_weight": 0, 00:18:14.852 "medium_priority_weight": 0, 00:18:14.852 "high_priority_weight": 0, 00:18:14.852 "nvme_adminq_poll_period_us": 10000, 00:18:14.852 "nvme_ioq_poll_period_us": 0, 00:18:14.852 "io_queue_requests": 0, 00:18:14.852 "delay_cmd_submit": true, 00:18:14.852 "transport_retry_count": 4, 00:18:14.852 "bdev_retry_count": 3, 00:18:14.852 "transport_ack_timeout": 0, 00:18:14.852 "ctrlr_loss_timeout_sec": 0, 00:18:14.852 "reconnect_delay_sec": 0, 00:18:14.852 "fast_io_fail_timeout_sec": 0, 00:18:14.852 "disable_auto_failback": false, 00:18:14.852 "generate_uuids": false, 00:18:14.852 "transport_tos": 0, 00:18:14.852 "nvme_error_stat": false, 00:18:14.852 "rdma_srq_size": 0, 00:18:14.852 "io_path_stat": false, 00:18:14.852 "allow_accel_sequence": false, 00:18:14.852 "rdma_max_cq_size": 0, 00:18:14.852 "rdma_cm_event_timeout_ms": 0, 00:18:14.852 "dhchap_digests": [ 00:18:14.852 "sha256", 00:18:14.852 "sha384", 00:18:14.852 "sha512" 00:18:14.852 ], 00:18:14.852 "dhchap_dhgroups": [ 00:18:14.852 "null", 00:18:14.852 "ffdhe2048", 00:18:14.852 "ffdhe3072", 00:18:14.852 "ffdhe4096", 00:18:14.852 "ffdhe6144", 00:18:14.852 "ffdhe8192" 00:18:14.852 ] 00:18:14.852 } 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "method": "bdev_nvme_set_hotplug", 00:18:14.852 "params": { 00:18:14.852 "period_us": 100000, 00:18:14.852 "enable": false 00:18:14.852 } 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "method": "bdev_malloc_create", 00:18:14.852 "params": { 00:18:14.852 "name": "malloc0", 00:18:14.852 "num_blocks": 8192, 00:18:14.852 "block_size": 4096, 00:18:14.852 "physical_block_size": 4096, 00:18:14.852 "uuid": "26c7f4e6-d950-4220-9421-fb2eb116bef0", 00:18:14.852 "optimal_io_boundary": 0, 00:18:14.852 "md_size": 0, 00:18:14.852 "dif_type": 0, 00:18:14.852 "dif_is_head_of_md": false, 00:18:14.852 "dif_pi_format": 0 00:18:14.852 } 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "method": "bdev_wait_for_examine" 00:18:14.852 } 00:18:14.852 ] 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "subsystem": "scsi", 00:18:14.852 "config": null 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "subsystem": "scheduler", 00:18:14.852 "config": [ 00:18:14.852 { 00:18:14.852 "method": "framework_set_scheduler", 00:18:14.852 "params": { 00:18:14.852 "name": "static" 00:18:14.852 } 00:18:14.852 } 00:18:14.852 ] 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "subsystem": "vhost_scsi", 00:18:14.852 "config": [] 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "subsystem": "vhost_blk", 00:18:14.852 "config": [] 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "subsystem": "ublk", 00:18:14.852 "config": [ 00:18:14.852 { 00:18:14.852 "method": "ublk_create_target", 00:18:14.852 "params": { 00:18:14.852 "cpumask": "1" 00:18:14.852 } 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "method": "ublk_start_disk", 00:18:14.852 "params": { 00:18:14.852 "bdev_name": "malloc0", 00:18:14.852 "ublk_id": 0, 00:18:14.852 "num_queues": 1, 00:18:14.852 "queue_depth": 128 00:18:14.852 } 00:18:14.852 } 00:18:14.852 ] 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "subsystem": "nbd", 00:18:14.852 "config": [] 00:18:14.852 }, 00:18:14.852 { 00:18:14.852 "subsystem": "nvmf", 00:18:14.852 "config": [ 00:18:14.852 { 00:18:14.852 "method": "nvmf_set_config", 00:18:14.852 "params": { 00:18:14.852 "discovery_filter": "match_any", 00:18:14.852 "admin_cmd_passthru": { 00:18:14.852 "identify_ctrlr": false 
00:18:14.852 }, 00:18:14.852 "dhchap_digests": [ 00:18:14.852 "sha256", 00:18:14.852 "sha384", 00:18:14.852 "sha512" 00:18:14.852 ], 00:18:14.852 "dhchap_dhgroups": [ 00:18:14.852 "null", 00:18:14.852 "ffdhe2048", 00:18:14.852 "ffdhe3072", 00:18:14.853 "ffdhe4096", 00:18:14.853 "ffdhe6144", 00:18:14.853 "ffdhe8192" 00:18:14.853 ] 00:18:14.853 } 00:18:14.853 }, 00:18:14.853 { 00:18:14.853 "method": "nvmf_set_max_subsystems", 00:18:14.853 "params": { 00:18:14.853 "max_subsystems": 1024 00:18:14.853 } 00:18:14.853 }, 00:18:14.853 { 00:18:14.853 "method": "nvmf_set_crdt", 00:18:14.853 "params": { 00:18:14.853 "crdt1": 0, 00:18:14.853 "crdt2": 0, 00:18:14.853 "crdt3": 0 00:18:14.853 } 00:18:14.853 } 00:18:14.853 ] 00:18:14.853 }, 00:18:14.853 { 00:18:14.853 "subsystem": "iscsi", 00:18:14.853 "config": [ 00:18:14.853 { 00:18:14.853 "method": "iscsi_set_options", 00:18:14.853 "params": { 00:18:14.853 "node_base": "iqn.2016-06.io.spdk", 00:18:14.853 "max_sessions": 128, 00:18:14.853 "max_connections_per_session": 2, 00:18:14.853 "max_queue_depth": 64, 00:18:14.853 "default_time2wait": 2, 00:18:14.853 "default_time2retain": 20, 00:18:14.853 "first_burst_length": 8192, 00:18:14.853 "immediate_data": true, 00:18:14.853 "allow_duplicated_isid": false, 00:18:14.853 "error_recovery_level": 0, 00:18:14.853 "nop_timeout": 60, 00:18:14.853 "nop_in_interval": 30, 00:18:14.853 "disable_chap": false, 00:18:14.853 "require_chap": false, 00:18:14.853 "mutual_chap": false, 00:18:14.853 "chap_group": 0, 00:18:14.853 "max_large_datain_per_connection": 64, 00:18:14.853 "max_r2t_per_connection": 4, 00:18:14.853 "pdu_pool_size": 36864, 00:18:14.853 "immediate_data_pool_size": 16384, 00:18:14.853 "data_out_pool_size": 2048 00:18:14.853 } 00:18:14.853 } 00:18:14.853 ] 00:18:14.853 } 00:18:14.853 ] 00:18:14.853 }' 00:18:14.853 17:52:41 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75178 00:18:14.853 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75178 ']' 00:18:14.853 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75178 00:18:14.853 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:18:14.853 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.853 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75178 00:18:14.853 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.853 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.853 killing process with pid 75178 00:18:14.853 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75178' 00:18:14.853 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75178 00:18:14.853 17:52:41 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75178 00:18:16.231 [2024-11-20 17:52:43.301269] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:16.231 [2024-11-20 17:52:43.335830] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:16.231 [2024-11-20 17:52:43.335966] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:16.231 [2024-11-20 17:52:43.351797] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:16.231 [2024-11-20 
17:52:43.351859] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:16.231 [2024-11-20 17:52:43.351877] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:16.231 [2024-11-20 17:52:43.351903] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:16.231 [2024-11-20 17:52:43.352056] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:18.134 17:52:45 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75248 00:18:18.134 17:52:45 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75248 00:18:18.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.134 17:52:45 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75248 ']' 00:18:18.134 17:52:45 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.134 17:52:45 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.134 17:52:45 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.134 17:52:45 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.134 17:52:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:18.134 17:52:45 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:18:18.134 17:52:45 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:18:18.134 "subsystems": [ 00:18:18.134 { 00:18:18.134 "subsystem": "fsdev", 00:18:18.134 "config": [ 00:18:18.134 { 00:18:18.134 "method": "fsdev_set_opts", 00:18:18.134 "params": { 00:18:18.134 "fsdev_io_pool_size": 65535, 00:18:18.134 "fsdev_io_cache_size": 256 00:18:18.134 } 00:18:18.134 } 00:18:18.134 ] 00:18:18.134 }, 00:18:18.134 { 00:18:18.134 "subsystem": "keyring", 00:18:18.134 "config": [] 00:18:18.134 }, 00:18:18.134 { 00:18:18.134 "subsystem": "iobuf", 00:18:18.134 "config": [ 00:18:18.134 { 00:18:18.134 "method": "iobuf_set_options", 00:18:18.134 "params": { 00:18:18.134 "small_pool_count": 8192, 00:18:18.134 "large_pool_count": 1024, 00:18:18.134 "small_bufsize": 8192, 00:18:18.134 "large_bufsize": 135168, 00:18:18.134 "enable_numa": false 00:18:18.134 } 00:18:18.134 } 00:18:18.134 ] 00:18:18.134 }, 00:18:18.134 { 00:18:18.134 "subsystem": "sock", 00:18:18.134 "config": [ 00:18:18.134 { 00:18:18.134 "method": "sock_set_default_impl", 00:18:18.134 "params": { 00:18:18.134 "impl_name": "posix" 00:18:18.134 } 00:18:18.134 }, 00:18:18.134 { 00:18:18.134 "method": "sock_impl_set_options", 00:18:18.134 "params": { 00:18:18.134 "impl_name": "ssl", 00:18:18.134 "recv_buf_size": 4096, 00:18:18.134 "send_buf_size": 4096, 00:18:18.134 "enable_recv_pipe": true, 00:18:18.134 "enable_quickack": false, 00:18:18.134 "enable_placement_id": 0, 00:18:18.134 "enable_zerocopy_send_server": true, 00:18:18.134 "enable_zerocopy_send_client": false, 00:18:18.134 "zerocopy_threshold": 0, 00:18:18.134 "tls_version": 0, 00:18:18.134 "enable_ktls": false 00:18:18.134 } 00:18:18.134 }, 00:18:18.134 { 00:18:18.134 "method": "sock_impl_set_options", 00:18:18.134 "params": { 00:18:18.134 "impl_name": "posix", 00:18:18.134 "recv_buf_size": 2097152, 00:18:18.134 "send_buf_size": 2097152, 00:18:18.134 "enable_recv_pipe": true, 00:18:18.134 "enable_quickack": false, 00:18:18.134 "enable_placement_id": 0, 00:18:18.134 "enable_zerocopy_send_server": true, 
00:18:18.134 "enable_zerocopy_send_client": false, 00:18:18.134 "zerocopy_threshold": 0, 00:18:18.134 "tls_version": 0, 00:18:18.134 "enable_ktls": false 00:18:18.134 } 00:18:18.134 } 00:18:18.134 ] 00:18:18.134 }, 00:18:18.134 { 00:18:18.134 "subsystem": "vmd", 00:18:18.134 "config": [] 00:18:18.134 }, 00:18:18.134 { 00:18:18.134 "subsystem": "accel", 00:18:18.134 "config": [ 00:18:18.134 { 00:18:18.134 "method": "accel_set_options", 00:18:18.134 "params": { 00:18:18.135 "small_cache_size": 128, 00:18:18.135 "large_cache_size": 16, 00:18:18.135 "task_count": 2048, 00:18:18.135 "sequence_count": 2048, 00:18:18.135 "buf_count": 2048 00:18:18.135 } 00:18:18.135 } 00:18:18.135 ] 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "subsystem": "bdev", 00:18:18.135 "config": [ 00:18:18.135 { 00:18:18.135 "method": "bdev_set_options", 00:18:18.135 "params": { 00:18:18.135 "bdev_io_pool_size": 65535, 00:18:18.135 "bdev_io_cache_size": 256, 00:18:18.135 "bdev_auto_examine": true, 00:18:18.135 "iobuf_small_cache_size": 128, 00:18:18.135 "iobuf_large_cache_size": 16 00:18:18.135 } 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "method": "bdev_raid_set_options", 00:18:18.135 "params": { 00:18:18.135 "process_window_size_kb": 1024, 00:18:18.135 "process_max_bandwidth_mb_sec": 0 00:18:18.135 } 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "method": "bdev_iscsi_set_options", 00:18:18.135 "params": { 00:18:18.135 "timeout_sec": 30 00:18:18.135 } 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "method": "bdev_nvme_set_options", 00:18:18.135 "params": { 00:18:18.135 "action_on_timeout": "none", 00:18:18.135 "timeout_us": 0, 00:18:18.135 "timeout_admin_us": 0, 00:18:18.135 "keep_alive_timeout_ms": 10000, 00:18:18.135 "arbitration_burst": 0, 00:18:18.135 "low_priority_weight": 0, 00:18:18.135 "medium_priority_weight": 0, 00:18:18.135 "high_priority_weight": 0, 00:18:18.135 "nvme_adminq_poll_period_us": 10000, 00:18:18.135 "nvme_ioq_poll_period_us": 0, 00:18:18.135 "io_queue_requests": 0, 00:18:18.135 "delay_cmd_submit": true, 00:18:18.135 "transport_retry_count": 4, 00:18:18.135 "bdev_retry_count": 3, 00:18:18.135 "transport_ack_timeout": 0, 00:18:18.135 "ctrlr_loss_timeout_sec": 0, 00:18:18.135 "reconnect_delay_sec": 0, 00:18:18.135 "fast_io_fail_timeout_sec": 0, 00:18:18.135 "disable_auto_failback": false, 00:18:18.135 "generate_uuids": false, 00:18:18.135 "transport_tos": 0, 00:18:18.135 "nvme_error_stat": false, 00:18:18.135 "rdma_srq_size": 0, 00:18:18.135 "io_path_stat": false, 00:18:18.135 "allow_accel_sequence": false, 00:18:18.135 "rdma_max_cq_size": 0, 00:18:18.135 "rdma_cm_event_timeout_ms": 0, 00:18:18.135 "dhchap_digests": [ 00:18:18.135 "sha256", 00:18:18.135 "sha384", 00:18:18.135 "sha512" 00:18:18.135 ], 00:18:18.135 "dhchap_dhgroups": [ 00:18:18.135 "null", 00:18:18.135 "ffdhe2048", 00:18:18.135 "ffdhe3072", 00:18:18.135 "ffdhe4096", 00:18:18.135 "ffdhe6144", 00:18:18.135 "ffdhe8192" 00:18:18.135 ] 00:18:18.135 } 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "method": "bdev_nvme_set_hotplug", 00:18:18.135 "params": { 00:18:18.135 "period_us": 100000, 00:18:18.135 "enable": false 00:18:18.135 } 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "method": "bdev_malloc_create", 00:18:18.135 "params": { 00:18:18.135 "name": "malloc0", 00:18:18.135 "num_blocks": 8192, 00:18:18.135 "block_size": 4096, 00:18:18.135 "physical_block_size": 4096, 00:18:18.135 "uuid": "26c7f4e6-d950-4220-9421-fb2eb116bef0", 00:18:18.135 "optimal_io_boundary": 0, 00:18:18.135 "md_size": 0, 00:18:18.135 "dif_type": 0, 00:18:18.135 
"dif_is_head_of_md": false, 00:18:18.135 "dif_pi_format": 0 00:18:18.135 } 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "method": "bdev_wait_for_examine" 00:18:18.135 } 00:18:18.135 ] 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "subsystem": "scsi", 00:18:18.135 "config": null 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "subsystem": "scheduler", 00:18:18.135 "config": [ 00:18:18.135 { 00:18:18.135 "method": "framework_set_scheduler", 00:18:18.135 "params": { 00:18:18.135 "name": "static" 00:18:18.135 } 00:18:18.135 } 00:18:18.135 ] 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "subsystem": "vhost_scsi", 00:18:18.135 "config": [] 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "subsystem": "vhost_blk", 00:18:18.135 "config": [] 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "subsystem": "ublk", 00:18:18.135 "config": [ 00:18:18.135 { 00:18:18.135 "method": "ublk_create_target", 00:18:18.135 "params": { 00:18:18.135 "cpumask": "1" 00:18:18.135 } 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "method": "ublk_start_disk", 00:18:18.135 "params": { 00:18:18.135 "bdev_name": "malloc0", 00:18:18.135 "ublk_id": 0, 00:18:18.135 "num_queues": 1, 00:18:18.135 "queue_depth": 128 00:18:18.135 } 00:18:18.135 } 00:18:18.135 ] 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "subsystem": "nbd", 00:18:18.135 "config": [] 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "subsystem": "nvmf", 00:18:18.135 "config": [ 00:18:18.135 { 00:18:18.135 "method": "nvmf_set_config", 00:18:18.135 "params": { 00:18:18.135 "discovery_filter": "match_any", 00:18:18.135 "admin_cmd_passthru": { 00:18:18.135 "identify_ctrlr": false 00:18:18.135 }, 00:18:18.135 "dhchap_digests": [ 00:18:18.135 "sha256", 00:18:18.135 "sha384", 00:18:18.135 "sha512" 00:18:18.135 ], 00:18:18.135 "dhchap_dhgroups": [ 00:18:18.135 "null", 00:18:18.135 "ffdhe2048", 00:18:18.135 "ffdhe3072", 00:18:18.135 "ffdhe4096", 00:18:18.135 "ffdhe6144", 00:18:18.135 "ffdhe8192" 00:18:18.135 ] 00:18:18.135 } 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "method": "nvmf_set_max_subsystems", 00:18:18.135 "params": { 00:18:18.135 "max_subsystems": 1024 00:18:18.135 } 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "method": "nvmf_set_crdt", 00:18:18.135 "params": { 00:18:18.135 "crdt1": 0, 00:18:18.135 "crdt2": 0, 00:18:18.135 "crdt3": 0 00:18:18.135 } 00:18:18.135 } 00:18:18.135 ] 00:18:18.135 }, 00:18:18.135 { 00:18:18.135 "subsystem": "iscsi", 00:18:18.135 "config": [ 00:18:18.135 { 00:18:18.135 "method": "iscsi_set_options", 00:18:18.135 "params": { 00:18:18.135 "node_base": "iqn.2016-06.io.spdk", 00:18:18.135 "max_sessions": 128, 00:18:18.135 "max_connections_per_session": 2, 00:18:18.135 "max_queue_depth": 64, 00:18:18.135 "default_time2wait": 2, 00:18:18.135 "default_time2retain": 20, 00:18:18.135 "first_burst_length": 8192, 00:18:18.135 "immediate_data": true, 00:18:18.135 "allow_duplicated_isid": false, 00:18:18.135 "error_recovery_level": 0, 00:18:18.135 "nop_timeout": 60, 00:18:18.135 "nop_in_interval": 30, 00:18:18.135 "disable_chap": false, 00:18:18.135 "require_chap": false, 00:18:18.135 "mutual_chap": false, 00:18:18.135 "chap_group": 0, 00:18:18.135 "max_large_datain_per_connection": 64, 00:18:18.135 "max_r2t_per_connection": 4, 00:18:18.135 "pdu_pool_size": 36864, 00:18:18.135 "immediate_data_pool_size": 16384, 00:18:18.135 "data_out_pool_size": 2048 00:18:18.135 } 00:18:18.135 } 00:18:18.135 ] 00:18:18.135 } 00:18:18.135 ] 00:18:18.135 }' 00:18:18.394 [2024-11-20 17:52:45.324750] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:18:18.394 [2024-11-20 17:52:45.324891] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75248 ] 00:18:18.394 [2024-11-20 17:52:45.487365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.652 [2024-11-20 17:52:45.597271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.587 [2024-11-20 17:52:46.686786] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:19.587 [2024-11-20 17:52:46.688062] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:19.587 [2024-11-20 17:52:46.694921] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:18:19.587 [2024-11-20 17:52:46.695007] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:18:19.587 [2024-11-20 17:52:46.695021] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:19.587 [2024-11-20 17:52:46.695029] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:19.587 [2024-11-20 17:52:46.703872] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:19.587 [2024-11-20 17:52:46.703901] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:19.587 [2024-11-20 17:52:46.710801] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:19.587 [2024-11-20 17:52:46.710917] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:19.587 [2024-11-20 17:52:46.726819] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:19.900 17:52:46 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75248 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75248 ']' 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75248 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75248 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:19.901 killing process with pid 75248 00:18:19.901 
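Note: once the restarted target is up, ublk.sh@122-123 verifies that the disk defined in the saved config actually came back: ublk_get_disks must report /dev/ublkb0 and the node must exist as a block device before the process is killed. The same two checks from a shell, assuming the default RPC socket:

    [[ "$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device')" == /dev/ublkb0 ]]
    test -b /dev/ublkb0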
17:52:46 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75248' 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75248 00:18:19.901 17:52:46 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75248 00:18:21.275 [2024-11-20 17:52:48.397832] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:21.275 [2024-11-20 17:52:48.434825] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:21.275 [2024-11-20 17:52:48.434958] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:21.275 [2024-11-20 17:52:48.442804] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:21.275 [2024-11-20 17:52:48.442854] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:21.275 [2024-11-20 17:52:48.442864] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:21.275 [2024-11-20 17:52:48.442891] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:21.275 [2024-11-20 17:52:48.443043] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:23.807 17:52:50 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:18:23.807 00:18:23.807 real 0m10.323s 00:18:23.807 user 0m7.808s 00:18:23.807 sys 0m3.206s 00:18:23.807 17:52:50 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:23.807 17:52:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:18:23.807 ************************************ 00:18:23.807 END TEST test_save_ublk_config 00:18:23.807 ************************************ 00:18:23.807 17:52:50 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75337 00:18:23.807 17:52:50 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:23.807 17:52:50 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:23.807 17:52:50 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75337 00:18:23.807 17:52:50 ublk -- common/autotest_common.sh@835 -- # '[' -z 75337 ']' 00:18:23.807 17:52:50 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.807 17:52:50 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:23.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:23.807 17:52:50 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.807 17:52:50 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:23.807 17:52:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:23.807 [2024-11-20 17:52:50.592623] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
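Note: the remaining tests run against a fresh target started with -m 0x3, a two-core reactor mask (cores 0 and 1); the "Total cores available: 2" notice and the two "Reactor started" lines below confirm it. A sketch of the equivalent manual launch; using rpc_get_methods as a readiness probe is an assumption here (waitforlisten in the harness polls the socket its own way):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk &
    # block until the target answers on the default RPC socket
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 30 rpc_get_methods > /dev/null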
00:18:23.807 [2024-11-20 17:52:50.592747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75337 ] 00:18:23.807 [2024-11-20 17:52:50.774903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:23.807 [2024-11-20 17:52:50.890861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.807 [2024-11-20 17:52:50.890896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.743 17:52:51 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.743 17:52:51 ublk -- common/autotest_common.sh@868 -- # return 0 00:18:24.743 17:52:51 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:18:24.743 17:52:51 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:24.743 17:52:51 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:24.743 17:52:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.743 ************************************ 00:18:24.743 START TEST test_create_ublk 00:18:24.743 ************************************ 00:18:24.743 17:52:51 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:18:24.743 17:52:51 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:18:24.743 17:52:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.743 17:52:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:24.743 [2024-11-20 17:52:51.832791] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:24.743 [2024-11-20 17:52:51.835469] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:24.743 17:52:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.743 17:52:51 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:18:24.743 17:52:51 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:18:24.743 17:52:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.743 17:52:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:25.002 17:52:52 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.002 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:18:25.002 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:25.002 17:52:52 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.002 17:52:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:25.002 [2024-11-20 17:52:52.128947] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:18:25.002 [2024-11-20 17:52:52.129410] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:25.002 [2024-11-20 17:52:52.129433] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:25.002 [2024-11-20 17:52:52.129442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:25.002 [2024-11-20 17:52:52.140797] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:25.002 [2024-11-20 17:52:52.140818] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:25.002 
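Note: the DEBUG lines above show that a single ublk_start_disk RPC drives a three-step control handshake with the kernel driver: UBLK_CMD_ADD_DEV registers the device, UBLK_CMD_SET_PARAMS pushes the queue and block parameters, and UBLK_CMD_START_DEV (completed just below) finally makes /dev/ublkb0 appear. From a shell the whole sequence is one call:

    # one RPC triggers ADD_DEV -> SET_PARAMS -> START_DEV
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512
    ls -l /dev/ublkb0   # the node exists only after START_DEV completes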
[2024-11-20 17:52:52.151802] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:25.002 [2024-11-20 17:52:52.152364] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:25.002 [2024-11-20 17:52:52.170794] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:25.261 17:52:52 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.261 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:18:25.261 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:18:25.261 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:18:25.261 17:52:52 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.261 17:52:52 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:25.261 17:52:52 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.261 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:18:25.261 { 00:18:25.261 "ublk_device": "/dev/ublkb0", 00:18:25.262 "id": 0, 00:18:25.262 "queue_depth": 512, 00:18:25.262 "num_queues": 4, 00:18:25.262 "bdev_name": "Malloc0" 00:18:25.262 } 00:18:25.262 ]' 00:18:25.262 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:18:25.262 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:25.262 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:18:25.262 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:18:25.262 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:18:25.262 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:18:25.262 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:18:25.262 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:18:25.262 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:18:25.262 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:25.262 17:52:52 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:18:25.262 17:52:52 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:18:25.262 17:52:52 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:18:25.262 17:52:52 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:18:25.262 17:52:52 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:18:25.262 17:52:52 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:18:25.262 17:52:52 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:18:25.262 17:52:52 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:18:25.262 17:52:52 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:18:25.262 17:52:52 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:18:25.262 17:52:52 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
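Note: run_fio_test assembles the single fio job that follows; because --time_based caps the run at 10 seconds, fio immediately warns that the verification read phase never starts, so the job only writes the 0xcc pattern. The expanded command, identical to the fio_template above but reflowed one flag per line for readability:

    fio --name=fio_test --filename=/dev/ublkb0 \
        --offset=0 --size=134217728 \
        --rw=write --direct=1 \
        --time_based --runtime=10 \
        --do_verify=1 --verify=pattern \
        --verify_pattern=0xcc --verify_state_save=0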
00:18:25.262 17:52:52 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:18:25.520 fio: verification read phase will never start because write phase uses all of runtime 00:18:25.520 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:18:25.520 fio-3.35 00:18:25.520 Starting 1 process 00:18:35.524 00:18:35.524 fio_test: (groupid=0, jobs=1): err= 0: pid=75389: Wed Nov 20 17:53:02 2024 00:18:35.524 write: IOPS=16.6k, BW=64.7MiB/s (67.8MB/s)(647MiB/10001msec); 0 zone resets 00:18:35.524 clat (usec): min=38, max=4168, avg=59.64, stdev=98.72 00:18:35.524 lat (usec): min=38, max=4184, avg=60.08, stdev=98.73 00:18:35.524 clat percentiles (usec): 00:18:35.524 | 1.00th=[ 40], 5.00th=[ 51], 10.00th=[ 52], 20.00th=[ 53], 00:18:35.524 | 30.00th=[ 55], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 57], 00:18:35.524 | 70.00th=[ 58], 80.00th=[ 59], 90.00th=[ 61], 95.00th=[ 64], 00:18:35.524 | 99.00th=[ 73], 99.50th=[ 81], 99.90th=[ 2073], 99.95th=[ 2802], 00:18:35.524 | 99.99th=[ 3490] 00:18:35.524 bw ( KiB/s): min=64968, max=76279, per=100.00%, avg=66455.95, stdev=2441.01, samples=19 00:18:35.524 iops : min=16242, max=19069, avg=16613.95, stdev=610.08, samples=19 00:18:35.524 lat (usec) : 50=4.25%, 100=95.41%, 250=0.14%, 500=0.02%, 750=0.01% 00:18:35.524 lat (usec) : 1000=0.01% 00:18:35.524 lat (msec) : 2=0.06%, 4=0.10%, 10=0.01% 00:18:35.524 cpu : usr=3.24%, sys=9.58%, ctx=165570, majf=0, minf=795 00:18:35.524 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:35.524 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.524 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.524 issued rwts: total=0,165570,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.524 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:35.524 00:18:35.524 Run status group 0 (all jobs): 00:18:35.524 WRITE: bw=64.7MiB/s (67.8MB/s), 64.7MiB/s-64.7MiB/s (67.8MB/s-67.8MB/s), io=647MiB (678MB), run=10001-10001msec 00:18:35.524 00:18:35.524 Disk stats (read/write): 00:18:35.524 ublkb0: ios=0/163960, merge=0/0, ticks=0/8660, in_queue=8661, util=99.11% 00:18:35.524 17:53:02 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:18:35.524 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.524 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.524 [2024-11-20 17:53:02.671357] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:35.783 [2024-11-20 17:53:02.706224] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:35.783 [2024-11-20 17:53:02.707108] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:35.783 [2024-11-20 17:53:02.713809] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:35.783 [2024-11-20 17:53:02.714082] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:35.783 [2024-11-20 17:53:02.714098] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:35.783 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.783 17:53:02 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:18:35.783 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.784 [2024-11-20 17:53:02.737872] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:18:35.784 request: 00:18:35.784 { 00:18:35.784 "ublk_id": 0, 00:18:35.784 "method": "ublk_stop_disk", 00:18:35.784 "req_id": 1 00:18:35.784 } 00:18:35.784 Got JSON-RPC error response 00:18:35.784 response: 00:18:35.784 { 00:18:35.784 "code": -19, 00:18:35.784 "message": "No such device" 00:18:35.784 } 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:35.784 17:53:02 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:35.784 [2024-11-20 17:53:02.761877] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:35.784 [2024-11-20 17:53:02.769788] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:35.784 [2024-11-20 17:53:02.769828] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.784 17:53:02 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.784 17:53:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:36.353 17:53:03 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.353 17:53:03 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:18:36.353 17:53:03 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:18:36.353 17:53:03 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.353 17:53:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:36.353 17:53:03 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.353 17:53:03 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:36.353 17:53:03 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:18:36.613 17:53:03 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:36.613 17:53:03 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:36.613 17:53:03 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.613 17:53:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:36.613 17:53:03 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.613 17:53:03 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:36.613 17:53:03 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:18:36.613 17:53:03 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:36.613 00:18:36.613 real 0m11.810s 00:18:36.613 user 0m0.701s 00:18:36.613 sys 0m1.090s 00:18:36.613 17:53:03 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:36.613 17:53:03 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:36.613 ************************************ 00:18:36.613 END TEST test_create_ublk 00:18:36.613 ************************************ 00:18:36.613 17:53:03 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:18:36.613 17:53:03 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:18:36.613 17:53:03 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:36.613 17:53:03 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:36.613 ************************************ 00:18:36.613 START TEST test_create_multi_ublk 00:18:36.613 ************************************ 00:18:36.613 17:53:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:18:36.613 17:53:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:18:36.613 17:53:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.613 17:53:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:36.613 [2024-11-20 17:53:03.714785] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:36.613 [2024-11-20 17:53:03.717338] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:36.613 17:53:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.613 17:53:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:18:36.613 17:53:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:18:36.613 17:53:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:36.613 17:53:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:18:36.613 17:53:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.613 17:53:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:36.872 17:53:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.872 17:53:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:18:36.872 17:53:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:18:36.872 17:53:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.872 17:53:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:36.872 [2024-11-20 17:53:04.002938] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
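Note: test_create_multi_ublk repeats the malloc-bdev-plus-ublk-disk pair for ids 0 through 3 (the seq 0 3 loop above), giving each disk 4 queues of depth 512. A condensed sketch of the same setup, assuming the ublk target has already been created:

    for i in 0 1 2 3; do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b Malloc$i 128 4096
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_start_disk Malloc$i $i -q 4 -d 512
    done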
00:18:36.872 [2024-11-20 17:53:04.003394] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:18:36.872 [2024-11-20 17:53:04.003412] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:18:36.872 [2024-11-20 17:53:04.003426] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:18:36.872 [2024-11-20 17:53:04.012066] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:36.872 [2024-11-20 17:53:04.012094] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:36.872 [2024-11-20 17:53:04.018796] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:36.872 [2024-11-20 17:53:04.019365] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:18:36.872 [2024-11-20 17:53:04.028234] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:18:36.872 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.872 17:53:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:18:36.872 17:53:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:36.872 17:53:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:18:36.872 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.872 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:37.441 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.441 17:53:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:18:37.441 17:53:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:18:37.441 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.441 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:37.441 [2024-11-20 17:53:04.329932] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:18:37.441 [2024-11-20 17:53:04.330367] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:18:37.441 [2024-11-20 17:53:04.330387] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:37.441 [2024-11-20 17:53:04.330395] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:37.441 [2024-11-20 17:53:04.337824] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:37.441 [2024-11-20 17:53:04.337847] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:37.441 [2024-11-20 17:53:04.345812] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:37.441 [2024-11-20 17:53:04.346363] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:37.441 [2024-11-20 17:53:04.354836] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:37.441 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.441 17:53:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:18:37.441 17:53:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:37.441 17:53:04 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:18:37.441 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.441 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:37.699 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.699 17:53:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:18:37.699 17:53:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:18:37.699 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.699 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:37.699 [2024-11-20 17:53:04.673919] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:18:37.699 [2024-11-20 17:53:04.674361] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:18:37.699 [2024-11-20 17:53:04.674378] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:18:37.699 [2024-11-20 17:53:04.674389] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:18:37.699 [2024-11-20 17:53:04.681820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:37.699 [2024-11-20 17:53:04.681848] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:37.699 [2024-11-20 17:53:04.689802] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:37.699 [2024-11-20 17:53:04.690367] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:18:37.699 [2024-11-20 17:53:04.698832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:18:37.699 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.699 17:53:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:18:37.699 17:53:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:37.699 17:53:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:18:37.699 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.699 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:37.958 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.958 17:53:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:18:37.958 17:53:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:18:37.959 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.959 17:53:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:37.959 [2024-11-20 17:53:04.993945] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:18:37.959 [2024-11-20 17:53:04.994375] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:18:37.959 [2024-11-20 17:53:04.994394] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:18:37.959 [2024-11-20 17:53:04.994402] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:18:37.959 [2024-11-20 
17:53:05.001829] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:37.959 [2024-11-20 17:53:05.001853] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:37.959 [2024-11-20 17:53:05.009803] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:37.959 [2024-11-20 17:53:05.010361] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:18:37.959 [2024-11-20 17:53:05.018806] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:18:37.959 17:53:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.959 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:18:37.959 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:18:37.959 17:53:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.959 17:53:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:37.959 17:53:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.959 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:18:37.959 { 00:18:37.959 "ublk_device": "/dev/ublkb0", 00:18:37.959 "id": 0, 00:18:37.959 "queue_depth": 512, 00:18:37.959 "num_queues": 4, 00:18:37.959 "bdev_name": "Malloc0" 00:18:37.959 }, 00:18:37.959 { 00:18:37.959 "ublk_device": "/dev/ublkb1", 00:18:37.959 "id": 1, 00:18:37.959 "queue_depth": 512, 00:18:37.959 "num_queues": 4, 00:18:37.959 "bdev_name": "Malloc1" 00:18:37.959 }, 00:18:37.959 { 00:18:37.959 "ublk_device": "/dev/ublkb2", 00:18:37.959 "id": 2, 00:18:37.959 "queue_depth": 512, 00:18:37.959 "num_queues": 4, 00:18:37.959 "bdev_name": "Malloc2" 00:18:37.959 }, 00:18:37.959 { 00:18:37.959 "ublk_device": "/dev/ublkb3", 00:18:37.959 "id": 3, 00:18:37.959 "queue_depth": 512, 00:18:37.959 "num_queues": 4, 00:18:37.959 "bdev_name": "Malloc3" 00:18:37.959 } 00:18:37.959 ]' 00:18:37.959 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:18:37.959 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:37.959 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:18:37.959 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:18:37.959 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:18:38.218 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:18:38.218 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:18:38.218 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:38.218 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:18:38.218 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:38.218 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:18:38.218 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:18:38.218 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:38.218 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:18:38.218 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
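Note: ublk_get_disks returns the four-entry JSON array shown above, and the test pins every field with jq, one expression per check. Two representative probes from a shell, assuming the disks are still running:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks | jq -r '.[1].ublk_device'   # expect /dev/ublkb1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks | jq -r '.[3].bdev_name'     # expect Malloc3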
00:18:38.218 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:18:38.218 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:18:38.218 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:18:38.477 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:38.477 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:18:38.477 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:38.477 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:18:38.477 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:18:38.477 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:38.477 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:18:38.477 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:18:38.477 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:18:38.477 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:18:38.477 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:18:38.477 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:38.477 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:18:38.737 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:38.737 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:18:38.737 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:18:38.737 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:38.737 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:18:38.737 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:18:38.737 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:18:38.737 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:18:38.737 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:18:38.737 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:18:38.737 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:18:38.737 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:18:38.737 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:18:38.737 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:18:38.737 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:18:38.737 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:18:38.996 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:38.996 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:18:38.996 17:53:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.996 17:53:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:38.996 [2024-11-20 17:53:05.913936] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:18:38.996 [2024-11-20 17:53:05.953845] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:38.996 [2024-11-20 17:53:05.954755] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:18:38.996 [2024-11-20 17:53:05.961827] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:38.996 [2024-11-20 17:53:05.962123] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:18:38.996 [2024-11-20 17:53:05.962138] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:18:38.996 17:53:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.996 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:38.996 17:53:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:18:38.996 17:53:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.996 17:53:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:38.996 [2024-11-20 17:53:05.977884] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:18:38.996 [2024-11-20 17:53:06.007213] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:38.996 [2024-11-20 17:53:06.008250] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:18:38.996 [2024-11-20 17:53:06.018844] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:38.996 [2024-11-20 17:53:06.019119] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:18:38.996 [2024-11-20 17:53:06.019137] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:18:38.996 17:53:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.996 17:53:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:38.996 17:53:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:18:38.996 17:53:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.996 17:53:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:38.996 [2024-11-20 17:53:06.030943] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:18:38.996 [2024-11-20 17:53:06.076843] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:38.996 [2024-11-20 17:53:06.077648] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:18:38.996 [2024-11-20 17:53:06.084811] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:38.996 [2024-11-20 17:53:06.085082] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:18:38.996 [2024-11-20 17:53:06.085095] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:18:38.996 17:53:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.996 17:53:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:38.996 17:53:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:18:38.996 17:53:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.997 17:53:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
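Note: teardown mirrors setup: each ublk_stop_disk expands to UBLK_CMD_STOP_DEV plus UBLK_CMD_DEL_DEV against the kernel, after which the backing bdev can be deleted and the target destroyed (ublk.sh@91 below allows 120 seconds for the destroy). A sketch of the full teardown for one disk, using id 3 from this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_stop_disk 3
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target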
00:18:38.997 [2024-11-20 17:53:06.100900] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:18:38.997 [2024-11-20 17:53:06.143828] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:18:38.997 [2024-11-20 17:53:06.144556] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:18:38.997 [2024-11-20 17:53:06.148859] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:18:38.997 [2024-11-20 17:53:06.149460] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:18:38.997 [2024-11-20 17:53:06.149480] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:18:38.997 17:53:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.997 17:53:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:18:39.256 [2024-11-20 17:53:06.340894] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:39.256 [2024-11-20 17:53:06.348798] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:39.256 [2024-11-20 17:53:06.348842] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:18:39.256 17:53:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:18:39.256 17:53:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:39.256 17:53:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:18:39.256 17:53:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.256 17:53:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:40.192 17:53:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.192 17:53:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:40.192 17:53:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:18:40.192 17:53:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.192 17:53:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:40.453 17:53:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.453 17:53:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:40.453 17:53:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:18:40.453 17:53:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.453 17:53:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:40.714 17:53:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.714 17:53:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:18:40.714 17:53:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:18:40.714 17:53:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.714 17:53:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:18:41.283 ************************************ 00:18:41.283 END TEST test_create_multi_ublk 00:18:41.283 ************************************ 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:18:41.283 00:18:41.283 real 0m4.626s 00:18:41.283 user 0m1.001s 00:18:41.283 sys 0m0.249s 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:41.283 17:53:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:18:41.283 17:53:08 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:18:41.283 17:53:08 ublk -- ublk/ublk.sh@147 -- # cleanup 00:18:41.283 17:53:08 ublk -- ublk/ublk.sh@130 -- # killprocess 75337 00:18:41.283 17:53:08 ublk -- common/autotest_common.sh@954 -- # '[' -z 75337 ']' 00:18:41.283 17:53:08 ublk -- common/autotest_common.sh@958 -- # kill -0 75337 00:18:41.283 17:53:08 ublk -- common/autotest_common.sh@959 -- # uname 00:18:41.283 17:53:08 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.283 17:53:08 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75337 00:18:41.283 killing process with pid 75337 00:18:41.283 17:53:08 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.283 17:53:08 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.283 17:53:08 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75337' 00:18:41.283 17:53:08 ublk -- common/autotest_common.sh@973 -- # kill 75337 00:18:41.283 17:53:08 ublk -- common/autotest_common.sh@978 -- # wait 75337 00:18:42.660 [2024-11-20 17:53:09.578283] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:18:42.660 [2024-11-20 17:53:09.578339] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:18:44.038 00:18:44.038 real 0m30.964s 00:18:44.038 user 0m44.311s 00:18:44.038 sys 0m10.345s 00:18:44.038 17:53:10 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.038 ************************************ 00:18:44.038 END TEST ublk 00:18:44.038 ************************************ 00:18:44.038 17:53:10 ublk -- common/autotest_common.sh@10 -- # set +x 00:18:44.038 17:53:10 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:44.038 17:53:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:18:44.038 17:53:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.038 17:53:10 -- common/autotest_common.sh@10 -- # set +x 00:18:44.038 ************************************ 00:18:44.038 START TEST ublk_recovery 00:18:44.038 ************************************ 00:18:44.038 17:53:10 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:18:44.038 * Looking for test storage... 00:18:44.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:18:44.038 17:53:11 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:44.038 17:53:11 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:18:44.039 17:53:11 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:44.039 17:53:11 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.039 17:53:11 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:18:44.039 17:53:11 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:44.039 17:53:11 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:44.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.039 --rc genhtml_branch_coverage=1 00:18:44.039 --rc genhtml_function_coverage=1 00:18:44.039 --rc genhtml_legend=1 00:18:44.039 --rc geninfo_all_blocks=1 00:18:44.039 --rc geninfo_unexecuted_blocks=1 00:18:44.039 00:18:44.039 ' 00:18:44.039 17:53:11 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:44.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.039 --rc genhtml_branch_coverage=1 00:18:44.039 --rc genhtml_function_coverage=1 00:18:44.039 --rc genhtml_legend=1 00:18:44.039 --rc geninfo_all_blocks=1 00:18:44.039 --rc geninfo_unexecuted_blocks=1 00:18:44.039 00:18:44.039 ' 00:18:44.039 17:53:11 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:44.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.039 --rc genhtml_branch_coverage=1 00:18:44.039 --rc genhtml_function_coverage=1 00:18:44.039 --rc genhtml_legend=1 00:18:44.039 --rc geninfo_all_blocks=1 00:18:44.039 --rc geninfo_unexecuted_blocks=1 00:18:44.039 00:18:44.039 ' 00:18:44.039 17:53:11 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:44.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:44.039 --rc genhtml_branch_coverage=1 00:18:44.039 --rc genhtml_function_coverage=1 00:18:44.039 --rc genhtml_legend=1 00:18:44.039 --rc geninfo_all_blocks=1 00:18:44.039 --rc geninfo_unexecuted_blocks=1 00:18:44.039 00:18:44.039 ' 00:18:44.039 17:53:11 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:18:44.039 17:53:11 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:18:44.039 17:53:11 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:18:44.039 17:53:11 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:18:44.039 17:53:11 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:18:44.039 17:53:11 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:18:44.039 17:53:11 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:18:44.039 17:53:11 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:18:44.039 17:53:11 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:18:44.039 17:53:11 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:18:44.039 17:53:11 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75764 00:18:44.039 17:53:11 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:44.039 17:53:11 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:44.039 17:53:11 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75764 00:18:44.039 17:53:11 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75764 ']' 00:18:44.039 17:53:11 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.039 17:53:11 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.039 17:53:11 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.039 17:53:11 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.039 17:53:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:44.298 [2024-11-20 17:53:11.267855] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:18:44.298 [2024-11-20 17:53:11.268647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75764 ] 00:18:44.298 [2024-11-20 17:53:11.442932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:44.557 [2024-11-20 17:53:11.561062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.557 [2024-11-20 17:53:11.561096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.494 17:53:12 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.494 17:53:12 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:45.494 17:53:12 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:18:45.494 17:53:12 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.494 17:53:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.494 [2024-11-20 17:53:12.449790] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:45.494 [2024-11-20 17:53:12.452671] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:45.494 17:53:12 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.494 17:53:12 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:45.494 17:53:12 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.494 17:53:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.494 malloc0 00:18:45.494 17:53:12 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.494 17:53:12 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:18:45.494 17:53:12 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.494 17:53:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:45.494 [2024-11-20 17:53:12.615939] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:18:45.494 [2024-11-20 17:53:12.616063] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:18:45.494 [2024-11-20 17:53:12.616078] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:45.494 [2024-11-20 17:53:12.616089] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:18:45.494 [2024-11-20 17:53:12.623820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:18:45.494 [2024-11-20 17:53:12.623845] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:18:45.494 [2024-11-20 17:53:12.631805] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:18:45.494 [2024-11-20 17:53:12.631949] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:18:45.494 [2024-11-20 17:53:12.653820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:18:45.494 1 00:18:45.494 17:53:12 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.494 17:53:12 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:18:46.872 17:53:13 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75805 00:18:46.872 17:53:13 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:18:46.872 17:53:13 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:18:46.872 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:46.872 fio-3.35 00:18:46.872 Starting 1 process 00:18:52.146 17:53:18 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75764 00:18:52.146 17:53:18 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:18:57.420 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75764 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:18:57.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:57.420 17:53:23 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75910 00:18:57.420 17:53:23 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:18:57.420 17:53:23 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:57.420 17:53:23 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75910 00:18:57.420 17:53:23 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75910 ']' 00:18:57.420 17:53:23 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:57.420 17:53:23 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:57.420 17:53:23 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:57.420 17:53:23 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:57.420 17:53:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:57.420 [2024-11-20 17:53:23.787886] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
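[editor note] The trace above is the crash half of the recovery scenario: a target is started, a ublk disk is exported, fio begins a 60 s random read/write workload against it, and the target is then killed with SIGKILL mid-run; a second target (pid 75910) is already coming up as the trace continues. Condensed into plain shell (commands and parameters are the ones visible in the trace; the cleanup trap and error handling are elided, and $rpc_py stands for scripts/rpc.py):

    # Crash half of ublk_recovery.sh, condensed from the trace above.
    modprobe ublk_drv
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &      # mask 0x3: reactors on cores 0 and 1
    spdk_pid=$!
    waitforlisten "$spdk_pid"                      # poll /var/tmp/spdk.sock until RPC answers

    $rpc_py ublk_create_target
    $rpc_py bdev_malloc_create -b malloc0 64 4096  # 64 MiB backing bdev, 4 KiB blocks
    $rpc_py ublk_start_disk malloc0 1 -q 2 -d 128  # exports /dev/ublkb1: 2 queues, QD 128

    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 &
    fio_proc=$!

    sleep 5
    kill -9 "$spdk_pid"                            # hard-kill the target mid-workload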
00:18:57.420 [2024-11-20 17:53:23.788249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75910 ] 00:18:57.420 [2024-11-20 17:53:23.972266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:57.420 [2024-11-20 17:53:24.090222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:57.420 [2024-11-20 17:53:24.090256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:57.988 17:53:24 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:57.988 17:53:24 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:18:57.988 17:53:24 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:18:57.988 17:53:24 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.988 17:53:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:57.988 [2024-11-20 17:53:24.984790] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:18:57.988 [2024-11-20 17:53:24.987386] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:18:57.988 17:53:24 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.988 17:53:24 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:18:57.988 17:53:24 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.988 17:53:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:57.988 malloc0 00:18:57.988 17:53:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.988 17:53:25 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:18:57.988 17:53:25 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.988 17:53:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:18:57.988 [2024-11-20 17:53:25.142951] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:18:57.988 [2024-11-20 17:53:25.142995] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:18:57.988 [2024-11-20 17:53:25.143007] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:18:57.988 [2024-11-20 17:53:25.150825] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:18:57.988 [2024-11-20 17:53:25.150854] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:18:57.988 [2024-11-20 17:53:25.150864] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:18:57.988 [2024-11-20 17:53:25.150962] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:18:57.988 1 00:18:57.988 17:53:25 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.988 17:53:25 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75805 00:18:57.988 [2024-11-20 17:53:25.158795] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:18:58.247 [2024-11-20 17:53:25.165436] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:18:58.247 [2024-11-20 17:53:25.172985] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:18:58.247 [2024-11-20 
17:53:25.173012] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:19:54.482 00:19:54.482 fio_test: (groupid=0, jobs=1): err= 0: pid=75812: Wed Nov 20 17:54:13 2024 00:19:54.482 read: IOPS=21.7k, BW=84.6MiB/s (88.7MB/s)(5076MiB/60001msec) 00:19:54.482 slat (usec): min=2, max=389, avg= 7.54, stdev= 2.27 00:19:54.482 clat (usec): min=1093, max=6510.0k, avg=2932.48, stdev=47404.05 00:19:54.482 lat (usec): min=1101, max=6510.0k, avg=2940.02, stdev=47404.06 00:19:54.482 clat percentiles (usec): 00:19:54.482 | 1.00th=[ 1975], 5.00th=[ 2180], 10.00th=[ 2212], 20.00th=[ 2278], 00:19:54.482 | 30.00th=[ 2311], 40.00th=[ 2343], 50.00th=[ 2409], 60.00th=[ 2474], 00:19:54.482 | 70.00th=[ 2606], 80.00th=[ 2671], 90.00th=[ 3097], 95.00th=[ 3884], 00:19:54.482 | 99.00th=[ 5080], 99.50th=[ 5538], 99.90th=[ 6456], 99.95th=[ 7373], 00:19:54.482 | 99.99th=[13173] 00:19:54.482 bw ( KiB/s): min=24536, max=105640, per=100.00%, avg=96407.66, stdev=11162.29, samples=107 00:19:54.482 iops : min= 6134, max=26410, avg=24101.91, stdev=2790.57, samples=107 00:19:54.482 write: IOPS=21.6k, BW=84.5MiB/s (88.6MB/s)(5070MiB/60001msec); 0 zone resets 00:19:54.482 slat (usec): min=2, max=530, avg= 7.61, stdev= 2.37 00:19:54.482 clat (usec): min=961, max=6510.3k, avg=2964.47, stdev=43861.30 00:19:54.482 lat (usec): min=967, max=6510.3k, avg=2972.07, stdev=43861.30 00:19:54.482 clat percentiles (usec): 00:19:54.482 | 1.00th=[ 1975], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2376], 00:19:54.482 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2507], 60.00th=[ 2573], 00:19:54.482 | 70.00th=[ 2704], 80.00th=[ 2802], 90.00th=[ 3163], 95.00th=[ 3884], 00:19:54.482 | 99.00th=[ 5080], 99.50th=[ 5538], 99.90th=[ 6652], 99.95th=[ 7504], 00:19:54.482 | 99.99th=[13304] 00:19:54.482 bw ( KiB/s): min=25208, max=105208, per=100.00%, avg=96308.22, stdev=11009.10, samples=107 00:19:54.482 iops : min= 6302, max=26302, avg=24077.04, stdev=2752.27, samples=107 00:19:54.482 lat (usec) : 1000=0.01% 00:19:54.482 lat (msec) : 2=1.26%, 4=94.29%, 10=4.44%, 20=0.01%, >=2000=0.01% 00:19:54.482 cpu : usr=11.71%, sys=33.27%, ctx=111517, majf=0, minf=14 00:19:54.482 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:54.482 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.482 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:54.482 issued rwts: total=1299525,1298000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.482 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:54.482 00:19:54.482 Run status group 0 (all jobs): 00:19:54.482 READ: bw=84.6MiB/s (88.7MB/s), 84.6MiB/s-84.6MiB/s (88.7MB/s-88.7MB/s), io=5076MiB (5323MB), run=60001-60001msec 00:19:54.482 WRITE: bw=84.5MiB/s (88.6MB/s), 84.5MiB/s-84.5MiB/s (88.6MB/s-88.6MB/s), io=5070MiB (5317MB), run=60001-60001msec 00:19:54.482 00:19:54.482 Disk stats (read/write): 00:19:54.482 ublkb1: ios=1297087/1295617, merge=0/0, ticks=3690728/3595140, in_queue=7285869, util=99.95% 00:19:54.482 17:54:13 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:19:54.482 17:54:13 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.482 17:54:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:54.482 [2024-11-20 17:54:13.956650] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:19:54.482 [2024-11-20 17:54:13.987903] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 
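[editor note] The second half replays setup against the new target, then recovers the kernel-side device instead of starting it from scratch: ublk_recover_disk re-binds the still-open /dev/ublkb1 (reported as device state 2) through the UBLK_CMD_GET_DEV_INFO and UBLK_CMD_START/END_USER_RECOVERY handshake traced above, after which the script simply waits out the fio job. The clean summary above and the UBLK_CMD_STOP_DEV teardown that follows confirm the run survived the target swap. Roughly, under the same conventions as the sketch above:

    # Recovery half: a fresh target re-adopts the still-open /dev/ublkb1.
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
    spdk_pid=$!
    waitforlisten "$spdk_pid"

    $rpc_py ublk_create_target
    $rpc_py bdev_malloc_create -b malloc0 64 4096  # recreate the backing bdev under the same name
    $rpc_py ublk_recover_disk malloc0 1            # GET_DEV_INFO, then START/END_USER_RECOVERY

    wait "$fio_proc"                               # fio rides through and finishes its 60 s run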
00:19:54.482 [2024-11-20 17:54:13.988176] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:19:54.482 [2024-11-20 17:54:13.995818] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:19:54.482 [2024-11-20 17:54:13.995924] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:19:54.482 [2024-11-20 17:54:13.995938] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:19:54.482 17:54:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.482 17:54:14 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:19:54.482 17:54:14 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.482 17:54:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:54.482 [2024-11-20 17:54:14.009935] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:54.482 [2024-11-20 17:54:14.018793] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:54.482 [2024-11-20 17:54:14.018841] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:19:54.482 17:54:14 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.482 17:54:14 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:19:54.482 17:54:14 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:19:54.482 17:54:14 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75910 00:19:54.482 17:54:14 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75910 ']' 00:19:54.482 17:54:14 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75910 00:19:54.482 17:54:14 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:19:54.482 17:54:14 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:54.482 17:54:14 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75910 00:19:54.482 killing process with pid 75910 00:19:54.482 17:54:14 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:54.482 17:54:14 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:54.482 17:54:14 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75910' 00:19:54.482 17:54:14 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75910 00:19:54.482 17:54:14 ublk_recovery -- common/autotest_common.sh@978 -- # wait 75910 00:19:54.482 [2024-11-20 17:54:16.159936] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:19:54.482 [2024-11-20 17:54:16.159991] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:19:54.482 ************************************ 00:19:54.482 END TEST ublk_recovery 00:19:54.482 ************************************ 00:19:54.482 00:19:54.482 real 1m6.668s 00:19:54.482 user 1m49.696s 00:19:54.482 sys 0m39.743s 00:19:54.482 17:54:17 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:54.482 17:54:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:19:54.482 17:54:17 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:19:54.482 17:54:17 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:54.482 17:54:17 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:54.482 17:54:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:54.482 17:54:17 -- common/autotest_common.sh@10 -- # set +x 00:19:54.482 17:54:17 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:54.482 17:54:17 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:19:54.482 17:54:17 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 
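[editor note] As a spot-check, the fio summary a few entries back is internally consistent, which is worth confirming after a mid-run target swap: 1,299,525 completed 4 KiB reads over the 60 s run reproduce the reported 84.6 MiB/s (88.7 MB/s), and err= 0 with util=99.95% means no I/O was lost across the recovery:

    # 1299525 reads x 4096 B = 5,322,854,400 B -> ~5076 MiB over 60 s -> ~84.6 MiB/s
    echo $(( 1299525 * 4096 / 1048576 ))                  # 5076  (MiB read)
    echo "scale=1; 1299525 * 4096 / 1048576 / 60" | bc    # 84.6  (MiB/s)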
00:19:54.482 17:54:17 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:19:54.482 17:54:17 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:19:54.482 17:54:17 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:19:54.482 17:54:17 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:19:54.482 17:54:17 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:19:54.482 17:54:17 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:19:54.482 17:54:17 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:19:54.482 17:54:17 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:54.482 17:54:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:19:54.482 17:54:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:54.482 17:54:17 -- common/autotest_common.sh@10 -- # set +x 00:19:54.482 ************************************ 00:19:54.482 START TEST ftl 00:19:54.482 ************************************ 00:19:54.482 17:54:17 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:54.482 * Looking for test storage... 00:19:54.482 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:54.482 17:54:17 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:54.482 17:54:17 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:19:54.482 17:54:17 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:54.482 17:54:17 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:54.482 17:54:17 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:54.482 17:54:17 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:54.482 17:54:17 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:54.482 17:54:17 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:19:54.482 17:54:17 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:19:54.482 17:54:17 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:19:54.482 17:54:17 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:19:54.482 17:54:17 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:19:54.482 17:54:17 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:19:54.482 17:54:17 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:19:54.483 17:54:17 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:54.483 17:54:17 ftl -- scripts/common.sh@344 -- # case "$op" in 00:19:54.483 17:54:17 ftl -- scripts/common.sh@345 -- # : 1 00:19:54.483 17:54:17 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:54.483 17:54:17 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:54.483 17:54:17 ftl -- scripts/common.sh@365 -- # decimal 1 00:19:54.483 17:54:17 ftl -- scripts/common.sh@353 -- # local d=1 00:19:54.483 17:54:17 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:54.483 17:54:17 ftl -- scripts/common.sh@355 -- # echo 1 00:19:54.483 17:54:17 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:19:54.483 17:54:17 ftl -- scripts/common.sh@366 -- # decimal 2 00:19:54.483 17:54:17 ftl -- scripts/common.sh@353 -- # local d=2 00:19:54.483 17:54:17 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:54.483 17:54:17 ftl -- scripts/common.sh@355 -- # echo 2 00:19:54.483 17:54:17 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:19:54.483 17:54:17 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:54.483 17:54:17 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:54.483 17:54:17 ftl -- scripts/common.sh@368 -- # return 0 00:19:54.483 17:54:17 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:54.483 17:54:17 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:54.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.483 --rc genhtml_branch_coverage=1 00:19:54.483 --rc genhtml_function_coverage=1 00:19:54.483 --rc genhtml_legend=1 00:19:54.483 --rc geninfo_all_blocks=1 00:19:54.483 --rc geninfo_unexecuted_blocks=1 00:19:54.483 00:19:54.483 ' 00:19:54.483 17:54:17 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:54.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.483 --rc genhtml_branch_coverage=1 00:19:54.483 --rc genhtml_function_coverage=1 00:19:54.483 --rc genhtml_legend=1 00:19:54.483 --rc geninfo_all_blocks=1 00:19:54.483 --rc geninfo_unexecuted_blocks=1 00:19:54.483 00:19:54.483 ' 00:19:54.483 17:54:17 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:54.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.483 --rc genhtml_branch_coverage=1 00:19:54.483 --rc genhtml_function_coverage=1 00:19:54.483 --rc genhtml_legend=1 00:19:54.483 --rc geninfo_all_blocks=1 00:19:54.483 --rc geninfo_unexecuted_blocks=1 00:19:54.483 00:19:54.483 ' 00:19:54.483 17:54:17 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:54.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:54.483 --rc genhtml_branch_coverage=1 00:19:54.483 --rc genhtml_function_coverage=1 00:19:54.483 --rc genhtml_legend=1 00:19:54.483 --rc geninfo_all_blocks=1 00:19:54.483 --rc geninfo_unexecuted_blocks=1 00:19:54.483 00:19:54.483 ' 00:19:54.483 17:54:17 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:54.483 17:54:17 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:19:54.483 17:54:17 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:54.483 17:54:17 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:54.483 17:54:17 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
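[editor note] The block above is scripts/common.sh deciding whether the installed lcov is new enough ("lt 1.15 2"): both version strings are split on '.', '-' and ':' into arrays and compared numerically field by field until one side wins. A minimal standalone rendering of that loop (the real helper also validates each field with a regex, visible as the [[ 1 =~ ^[0-9]+$ ]] checks in the trace):

    # Condensed sketch of the cmp_versions loop traced above.
    ver_lt() {                                    # succeeds if $1 < $2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ver1[v] > ver2[v] )) && return 1   # unset trailing fields count as 0
            (( ver1[v] < ver2[v] )) && return 0
        done
        return 1                                  # equal, so not less-than
    }
    ver_lt 1.15 2 && echo "lcov older than 2: fall back to the --rc options above"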
00:19:54.483 17:54:17 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:54.483 17:54:17 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:54.483 17:54:17 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:54.483 17:54:17 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:54.483 17:54:17 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.483 17:54:17 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.483 17:54:17 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:54.483 17:54:17 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:54.483 17:54:17 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:54.483 17:54:17 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:54.483 17:54:17 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:54.483 17:54:17 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:54.483 17:54:17 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.483 17:54:17 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:54.483 17:54:17 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:54.483 17:54:17 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:54.483 17:54:17 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:54.483 17:54:17 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:54.483 17:54:17 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:54.483 17:54:17 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:54.483 17:54:17 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:54.483 17:54:17 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:54.483 17:54:17 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:54.483 17:54:17 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:54.483 17:54:17 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:54.483 17:54:17 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:19:54.483 17:54:17 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:19:54.483 17:54:17 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:19:54.483 17:54:17 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:19:54.483 17:54:17 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:54.483 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:54.483 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:54.483 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:54.483 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:54.483 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:54.483 17:54:18 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76727 00:19:54.483 17:54:18 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:54.483 17:54:18 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76727 00:19:54.483 17:54:18 ftl -- common/autotest_common.sh@835 -- # '[' -z 76727 ']' 00:19:54.483 17:54:18 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.483 17:54:18 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.483 17:54:18 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.483 17:54:18 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.483 17:54:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:54.483 [2024-11-20 17:54:18.947922] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:19:54.483 [2024-11-20 17:54:18.948251] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76727 ] 00:19:54.483 [2024-11-20 17:54:19.131905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.483 [2024-11-20 17:54:19.261840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.483 17:54:19 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.483 17:54:19 ftl -- common/autotest_common.sh@868 -- # return 0 00:19:54.483 17:54:19 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:19:54.483 17:54:19 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:19:54.483 17:54:21 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:19:54.483 17:54:21 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:54.483 17:54:21 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:19:54.483 17:54:21 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:54.483 17:54:21 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:54.741 17:54:21 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:19:54.742 17:54:21 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:19:54.742 17:54:21 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:19:54.742 17:54:21 ftl -- ftl/ftl.sh@50 -- # break 00:19:54.742 17:54:21 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:19:54.742 17:54:21 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:19:54.742 17:54:21 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:19:54.742 17:54:21 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:19:55.000 17:54:21 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:19:55.000 17:54:21 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:19:55.000 17:54:21 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:19:55.000 17:54:21 ftl -- ftl/ftl.sh@63 -- # break 00:19:55.000 17:54:21 ftl -- ftl/ftl.sh@66 -- # killprocess 76727 00:19:55.000 17:54:21 ftl -- common/autotest_common.sh@954 -- # '[' -z 76727 ']' 00:19:55.000 17:54:21 ftl -- common/autotest_common.sh@958 -- # kill -0 76727 00:19:55.000 17:54:21 ftl -- common/autotest_common.sh@959 -- # uname 00:19:55.000 17:54:21 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:55.000 17:54:21 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76727 00:19:55.000 killing process with pid 76727 00:19:55.000 17:54:21 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:55.000 17:54:21 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:55.000 17:54:21 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76727' 00:19:55.000 17:54:21 ftl -- common/autotest_common.sh@973 -- # kill 76727 00:19:55.000 17:54:21 ftl -- common/autotest_common.sh@978 -- # wait 76727 00:19:57.530 17:54:24 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:19:57.530 17:54:24 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:57.530 17:54:24 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:57.530 17:54:24 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:57.530 17:54:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:57.530 ************************************ 00:19:57.530 START TEST ftl_fio_basic 00:19:57.530 ************************************ 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:19:57.530 * Looking for test storage... 00:19:57.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:57.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.530 --rc genhtml_branch_coverage=1 00:19:57.530 --rc genhtml_function_coverage=1 00:19:57.530 --rc genhtml_legend=1 00:19:57.530 --rc geninfo_all_blocks=1 00:19:57.530 --rc geninfo_unexecuted_blocks=1 00:19:57.530 00:19:57.530 ' 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:57.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.530 --rc genhtml_branch_coverage=1 00:19:57.530 --rc genhtml_function_coverage=1 00:19:57.530 --rc genhtml_legend=1 00:19:57.530 --rc geninfo_all_blocks=1 00:19:57.530 --rc geninfo_unexecuted_blocks=1 00:19:57.530 00:19:57.530 ' 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:57.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.530 --rc genhtml_branch_coverage=1 00:19:57.530 --rc genhtml_function_coverage=1 00:19:57.530 --rc genhtml_legend=1 00:19:57.530 --rc geninfo_all_blocks=1 00:19:57.530 --rc geninfo_unexecuted_blocks=1 00:19:57.530 00:19:57.530 ' 00:19:57.530 17:54:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:57.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:57.530 --rc genhtml_branch_coverage=1 00:19:57.530 --rc genhtml_function_coverage=1 00:19:57.530 --rc genhtml_legend=1 00:19:57.531 --rc geninfo_all_blocks=1 00:19:57.531 --rc geninfo_unexecuted_blocks=1 00:19:57.531 00:19:57.531 ' 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76870 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76870 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76870 ']' 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:57.531 17:54:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:57.790 [2024-11-20 17:54:24.730479] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
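[editor note] fio.sh was invoked as "fio.sh 0000:00:11.0 0000:00:10.0 basic", so the suite table traced above expands the third argument into three job names; each name presumably resolves to a fio job file elsewhere in the test tree. A sketch of that lookup, using only what the trace shows:

    # How fio.sh expands the suite name (declarations visible in the trace).
    declare -A suite
    suite[basic]='randw-verify randw-verify-j2 randw-verify-depth128'
    tests=${suite[basic]}          # positional argument 3 selected "basic" in this run
    for t in $tests; do
        echo "job: $t"             # the real script feeds each of these to fio
    done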
00:19:57.790 [2024-11-20 17:54:24.731103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76870 ] 00:19:57.790 [2024-11-20 17:54:24.908314] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:58.049 [2024-11-20 17:54:25.013488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.049 [2024-11-20 17:54:25.013655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.049 [2024-11-20 17:54:25.013689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.987 17:54:25 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:58.987 17:54:25 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:19:58.987 17:54:25 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:58.987 17:54:25 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:19:58.987 17:54:25 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:58.987 17:54:25 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:19:58.987 17:54:25 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:19:58.987 17:54:25 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:59.246 17:54:26 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:59.246 17:54:26 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:19:59.246 17:54:26 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:59.246 17:54:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:59.246 17:54:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:59.246 17:54:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:19:59.246 17:54:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:19:59.246 17:54:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:59.246 17:54:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:59.246 { 00:19:59.246 "name": "nvme0n1", 00:19:59.246 "aliases": [ 00:19:59.246 "c2b137a1-650c-45c9-beea-641d99ab743f" 00:19:59.246 ], 00:19:59.246 "product_name": "NVMe disk", 00:19:59.246 "block_size": 4096, 00:19:59.246 "num_blocks": 1310720, 00:19:59.246 "uuid": "c2b137a1-650c-45c9-beea-641d99ab743f", 00:19:59.246 "numa_id": -1, 00:19:59.246 "assigned_rate_limits": { 00:19:59.246 "rw_ios_per_sec": 0, 00:19:59.246 "rw_mbytes_per_sec": 0, 00:19:59.246 "r_mbytes_per_sec": 0, 00:19:59.246 "w_mbytes_per_sec": 0 00:19:59.246 }, 00:19:59.246 "claimed": false, 00:19:59.246 "zoned": false, 00:19:59.246 "supported_io_types": { 00:19:59.246 "read": true, 00:19:59.246 "write": true, 00:19:59.246 "unmap": true, 00:19:59.246 "flush": true, 00:19:59.246 "reset": true, 00:19:59.246 "nvme_admin": true, 00:19:59.246 "nvme_io": true, 00:19:59.246 "nvme_io_md": false, 00:19:59.246 "write_zeroes": true, 00:19:59.246 "zcopy": false, 00:19:59.246 "get_zone_info": false, 00:19:59.246 "zone_management": false, 00:19:59.246 "zone_append": false, 00:19:59.246 "compare": true, 00:19:59.246 "compare_and_write": false, 00:19:59.246 "abort": true, 00:19:59.246 
"seek_hole": false, 00:19:59.246 "seek_data": false, 00:19:59.246 "copy": true, 00:19:59.246 "nvme_iov_md": false 00:19:59.246 }, 00:19:59.246 "driver_specific": { 00:19:59.246 "nvme": [ 00:19:59.246 { 00:19:59.246 "pci_address": "0000:00:11.0", 00:19:59.246 "trid": { 00:19:59.246 "trtype": "PCIe", 00:19:59.246 "traddr": "0000:00:11.0" 00:19:59.246 }, 00:19:59.246 "ctrlr_data": { 00:19:59.246 "cntlid": 0, 00:19:59.246 "vendor_id": "0x1b36", 00:19:59.246 "model_number": "QEMU NVMe Ctrl", 00:19:59.246 "serial_number": "12341", 00:19:59.246 "firmware_revision": "8.0.0", 00:19:59.246 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:59.246 "oacs": { 00:19:59.246 "security": 0, 00:19:59.246 "format": 1, 00:19:59.246 "firmware": 0, 00:19:59.246 "ns_manage": 1 00:19:59.246 }, 00:19:59.246 "multi_ctrlr": false, 00:19:59.246 "ana_reporting": false 00:19:59.246 }, 00:19:59.246 "vs": { 00:19:59.246 "nvme_version": "1.4" 00:19:59.246 }, 00:19:59.246 "ns_data": { 00:19:59.246 "id": 1, 00:19:59.246 "can_share": false 00:19:59.246 } 00:19:59.246 } 00:19:59.246 ], 00:19:59.246 "mp_policy": "active_passive" 00:19:59.246 } 00:19:59.246 } 00:19:59.246 ]' 00:19:59.246 17:54:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:59.505 17:54:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:19:59.505 17:54:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:59.505 17:54:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:59.505 17:54:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:59.505 17:54:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:19:59.505 17:54:26 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:19:59.505 17:54:26 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:59.505 17:54:26 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:19:59.505 17:54:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:59.505 17:54:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:59.764 17:54:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:19:59.764 17:54:26 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:59.764 17:54:26 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=4554cc99-1ef1-406a-b7fd-8c03a2ec3b2c 00:19:59.764 17:54:26 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4554cc99-1ef1-406a-b7fd-8c03a2ec3b2c 00:20:00.023 17:54:27 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=37805023-1674-46ce-8b37-cee13b76b5fe 00:20:00.023 17:54:27 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 37805023-1674-46ce-8b37-cee13b76b5fe 00:20:00.023 17:54:27 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:20:00.023 17:54:27 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:00.023 17:54:27 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=37805023-1674-46ce-8b37-cee13b76b5fe 00:20:00.023 17:54:27 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:20:00.023 17:54:27 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 37805023-1674-46ce-8b37-cee13b76b5fe 00:20:00.023 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=37805023-1674-46ce-8b37-cee13b76b5fe 
00:20:00.023 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:00.023 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:00.023 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:00.023 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 37805023-1674-46ce-8b37-cee13b76b5fe 00:20:00.282 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:00.282 { 00:20:00.283 "name": "37805023-1674-46ce-8b37-cee13b76b5fe", 00:20:00.283 "aliases": [ 00:20:00.283 "lvs/nvme0n1p0" 00:20:00.283 ], 00:20:00.283 "product_name": "Logical Volume", 00:20:00.283 "block_size": 4096, 00:20:00.283 "num_blocks": 26476544, 00:20:00.283 "uuid": "37805023-1674-46ce-8b37-cee13b76b5fe", 00:20:00.283 "assigned_rate_limits": { 00:20:00.283 "rw_ios_per_sec": 0, 00:20:00.283 "rw_mbytes_per_sec": 0, 00:20:00.283 "r_mbytes_per_sec": 0, 00:20:00.283 "w_mbytes_per_sec": 0 00:20:00.283 }, 00:20:00.283 "claimed": false, 00:20:00.283 "zoned": false, 00:20:00.283 "supported_io_types": { 00:20:00.283 "read": true, 00:20:00.283 "write": true, 00:20:00.283 "unmap": true, 00:20:00.283 "flush": false, 00:20:00.283 "reset": true, 00:20:00.283 "nvme_admin": false, 00:20:00.283 "nvme_io": false, 00:20:00.283 "nvme_io_md": false, 00:20:00.283 "write_zeroes": true, 00:20:00.283 "zcopy": false, 00:20:00.283 "get_zone_info": false, 00:20:00.283 "zone_management": false, 00:20:00.283 "zone_append": false, 00:20:00.283 "compare": false, 00:20:00.283 "compare_and_write": false, 00:20:00.283 "abort": false, 00:20:00.283 "seek_hole": true, 00:20:00.283 "seek_data": true, 00:20:00.283 "copy": false, 00:20:00.283 "nvme_iov_md": false 00:20:00.283 }, 00:20:00.283 "driver_specific": { 00:20:00.283 "lvol": { 00:20:00.283 "lvol_store_uuid": "4554cc99-1ef1-406a-b7fd-8c03a2ec3b2c", 00:20:00.283 "base_bdev": "nvme0n1", 00:20:00.283 "thin_provision": true, 00:20:00.283 "num_allocated_clusters": 0, 00:20:00.283 "snapshot": false, 00:20:00.283 "clone": false, 00:20:00.283 "esnap_clone": false 00:20:00.283 } 00:20:00.283 } 00:20:00.283 } 00:20:00.283 ]' 00:20:00.283 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:00.283 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:00.283 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:00.542 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:00.542 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:00.542 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:00.542 17:54:27 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:20:00.542 17:54:27 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:20:00.542 17:54:27 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:00.801 17:54:27 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:00.801 17:54:27 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:00.801 17:54:27 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 37805023-1674-46ce-8b37-cee13b76b5fe 00:20:00.801 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=37805023-1674-46ce-8b37-cee13b76b5fe 00:20:00.801 17:54:27 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:00.801 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:00.801 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:00.801 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 37805023-1674-46ce-8b37-cee13b76b5fe 00:20:00.801 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:00.801 { 00:20:00.801 "name": "37805023-1674-46ce-8b37-cee13b76b5fe", 00:20:00.801 "aliases": [ 00:20:00.801 "lvs/nvme0n1p0" 00:20:00.801 ], 00:20:00.801 "product_name": "Logical Volume", 00:20:00.801 "block_size": 4096, 00:20:00.801 "num_blocks": 26476544, 00:20:00.801 "uuid": "37805023-1674-46ce-8b37-cee13b76b5fe", 00:20:00.801 "assigned_rate_limits": { 00:20:00.801 "rw_ios_per_sec": 0, 00:20:00.801 "rw_mbytes_per_sec": 0, 00:20:00.801 "r_mbytes_per_sec": 0, 00:20:00.801 "w_mbytes_per_sec": 0 00:20:00.801 }, 00:20:00.801 "claimed": false, 00:20:00.801 "zoned": false, 00:20:00.801 "supported_io_types": { 00:20:00.801 "read": true, 00:20:00.801 "write": true, 00:20:00.801 "unmap": true, 00:20:00.801 "flush": false, 00:20:00.801 "reset": true, 00:20:00.801 "nvme_admin": false, 00:20:00.801 "nvme_io": false, 00:20:00.801 "nvme_io_md": false, 00:20:00.801 "write_zeroes": true, 00:20:00.801 "zcopy": false, 00:20:00.801 "get_zone_info": false, 00:20:00.801 "zone_management": false, 00:20:00.801 "zone_append": false, 00:20:00.801 "compare": false, 00:20:00.801 "compare_and_write": false, 00:20:00.801 "abort": false, 00:20:00.801 "seek_hole": true, 00:20:00.801 "seek_data": true, 00:20:00.801 "copy": false, 00:20:00.801 "nvme_iov_md": false 00:20:00.801 }, 00:20:00.801 "driver_specific": { 00:20:00.801 "lvol": { 00:20:00.801 "lvol_store_uuid": "4554cc99-1ef1-406a-b7fd-8c03a2ec3b2c", 00:20:00.801 "base_bdev": "nvme0n1", 00:20:00.801 "thin_provision": true, 00:20:00.801 "num_allocated_clusters": 0, 00:20:00.801 "snapshot": false, 00:20:00.801 "clone": false, 00:20:00.801 "esnap_clone": false 00:20:00.801 } 00:20:00.801 } 00:20:00.801 } 00:20:00.801 ]' 00:20:00.801 17:54:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:01.060 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:01.060 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:01.060 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:01.060 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:01.060 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:01.060 17:54:28 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:20:01.060 17:54:28 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:01.319 17:54:28 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:20:01.319 17:54:28 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:20:01.319 17:54:28 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:20:01.319 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:20:01.319 17:54:28 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 37805023-1674-46ce-8b37-cee13b76b5fe 00:20:01.319 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=37805023-1674-46ce-8b37-cee13b76b5fe 00:20:01.319 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:01.319 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:20:01.319 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:20:01.319 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 37805023-1674-46ce-8b37-cee13b76b5fe 00:20:01.579 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:01.579 { 00:20:01.579 "name": "37805023-1674-46ce-8b37-cee13b76b5fe", 00:20:01.579 "aliases": [ 00:20:01.579 "lvs/nvme0n1p0" 00:20:01.579 ], 00:20:01.579 "product_name": "Logical Volume", 00:20:01.579 "block_size": 4096, 00:20:01.579 "num_blocks": 26476544, 00:20:01.579 "uuid": "37805023-1674-46ce-8b37-cee13b76b5fe", 00:20:01.579 "assigned_rate_limits": { 00:20:01.579 "rw_ios_per_sec": 0, 00:20:01.579 "rw_mbytes_per_sec": 0, 00:20:01.579 "r_mbytes_per_sec": 0, 00:20:01.579 "w_mbytes_per_sec": 0 00:20:01.579 }, 00:20:01.579 "claimed": false, 00:20:01.579 "zoned": false, 00:20:01.579 "supported_io_types": { 00:20:01.579 "read": true, 00:20:01.579 "write": true, 00:20:01.579 "unmap": true, 00:20:01.579 "flush": false, 00:20:01.579 "reset": true, 00:20:01.579 "nvme_admin": false, 00:20:01.579 "nvme_io": false, 00:20:01.579 "nvme_io_md": false, 00:20:01.579 "write_zeroes": true, 00:20:01.579 "zcopy": false, 00:20:01.579 "get_zone_info": false, 00:20:01.579 "zone_management": false, 00:20:01.579 "zone_append": false, 00:20:01.579 "compare": false, 00:20:01.579 "compare_and_write": false, 00:20:01.579 "abort": false, 00:20:01.579 "seek_hole": true, 00:20:01.579 "seek_data": true, 00:20:01.579 "copy": false, 00:20:01.579 "nvme_iov_md": false 00:20:01.579 }, 00:20:01.579 "driver_specific": { 00:20:01.579 "lvol": { 00:20:01.579 "lvol_store_uuid": "4554cc99-1ef1-406a-b7fd-8c03a2ec3b2c", 00:20:01.579 "base_bdev": "nvme0n1", 00:20:01.579 "thin_provision": true, 00:20:01.579 "num_allocated_clusters": 0, 00:20:01.579 "snapshot": false, 00:20:01.579 "clone": false, 00:20:01.579 "esnap_clone": false 00:20:01.579 } 00:20:01.579 } 00:20:01.579 } 00:20:01.579 ]' 00:20:01.579 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:01.579 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:20:01.579 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:01.579 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:01.579 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:01.579 17:54:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:20:01.579 17:54:28 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:20:01.579 17:54:28 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:20:01.579 17:54:28 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 37805023-1674-46ce-8b37-cee13b76b5fe -c nvc0n1p0 --l2p_dram_limit 60 00:20:01.839 [2024-11-20 17:54:28.811716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.839 [2024-11-20 17:54:28.811763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:01.839 [2024-11-20 17:54:28.811794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:01.839 
[2024-11-20 17:54:28.811805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.839 [2024-11-20 17:54:28.811887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.839 [2024-11-20 17:54:28.811904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:01.839 [2024-11-20 17:54:28.811917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:01.839 [2024-11-20 17:54:28.811927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.839 [2024-11-20 17:54:28.811963] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:01.839 [2024-11-20 17:54:28.812969] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:01.839 [2024-11-20 17:54:28.813004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.839 [2024-11-20 17:54:28.813015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:01.839 [2024-11-20 17:54:28.813028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.046 ms 00:20:01.839 [2024-11-20 17:54:28.813038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.839 [2024-11-20 17:54:28.813151] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e698cc5b-0bac-40ff-b27e-ba23cb9f865e 00:20:01.839 [2024-11-20 17:54:28.814595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.839 [2024-11-20 17:54:28.814634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:01.839 [2024-11-20 17:54:28.814646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:01.839 [2024-11-20 17:54:28.814659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.839 [2024-11-20 17:54:28.821977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.839 [2024-11-20 17:54:28.822012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:01.839 [2024-11-20 17:54:28.822024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.261 ms 00:20:01.840 [2024-11-20 17:54:28.822037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.840 [2024-11-20 17:54:28.822148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.840 [2024-11-20 17:54:28.822164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:01.840 [2024-11-20 17:54:28.822176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:20:01.840 [2024-11-20 17:54:28.822192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.840 [2024-11-20 17:54:28.822287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.840 [2024-11-20 17:54:28.822309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:01.840 [2024-11-20 17:54:28.822320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:01.840 [2024-11-20 17:54:28.822333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.840 [2024-11-20 17:54:28.822376] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:01.840 [2024-11-20 17:54:28.827106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.840 [2024-11-20 
17:54:28.827138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:01.840 [2024-11-20 17:54:28.827155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.743 ms 00:20:01.840 [2024-11-20 17:54:28.827168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.840 [2024-11-20 17:54:28.827219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.840 [2024-11-20 17:54:28.827231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:01.840 [2024-11-20 17:54:28.827244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:01.840 [2024-11-20 17:54:28.827254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.840 [2024-11-20 17:54:28.827306] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:01.840 [2024-11-20 17:54:28.827463] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:01.840 [2024-11-20 17:54:28.827486] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:01.840 [2024-11-20 17:54:28.827500] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:01.840 [2024-11-20 17:54:28.827516] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:01.840 [2024-11-20 17:54:28.827528] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:01.840 [2024-11-20 17:54:28.827542] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:01.840 [2024-11-20 17:54:28.827552] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:01.840 [2024-11-20 17:54:28.827564] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:01.840 [2024-11-20 17:54:28.827574] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:01.840 [2024-11-20 17:54:28.827588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.840 [2024-11-20 17:54:28.827600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:01.840 [2024-11-20 17:54:28.827615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:20:01.840 [2024-11-20 17:54:28.827625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.840 [2024-11-20 17:54:28.827714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.840 [2024-11-20 17:54:28.827725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:01.840 [2024-11-20 17:54:28.827738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:20:01.840 [2024-11-20 17:54:28.827748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.840 [2024-11-20 17:54:28.827868] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:01.840 [2024-11-20 17:54:28.827881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:01.840 [2024-11-20 17:54:28.827897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.840 [2024-11-20 17:54:28.827907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.840 [2024-11-20 17:54:28.827920] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:20:01.840 [2024-11-20 17:54:28.827929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:01.840 [2024-11-20 17:54:28.827941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:01.840 [2024-11-20 17:54:28.827950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:01.840 [2024-11-20 17:54:28.827962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:01.840 [2024-11-20 17:54:28.827972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.840 [2024-11-20 17:54:28.827986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:01.840 [2024-11-20 17:54:28.827995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:01.840 [2024-11-20 17:54:28.828006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:01.840 [2024-11-20 17:54:28.828015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:01.840 [2024-11-20 17:54:28.828027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:01.840 [2024-11-20 17:54:28.828037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.840 [2024-11-20 17:54:28.828052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:01.840 [2024-11-20 17:54:28.828062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:01.840 [2024-11-20 17:54:28.828074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.840 [2024-11-20 17:54:28.828084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:01.840 [2024-11-20 17:54:28.828101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:01.840 [2024-11-20 17:54:28.828111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.840 [2024-11-20 17:54:28.828124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:01.840 [2024-11-20 17:54:28.828134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:01.840 [2024-11-20 17:54:28.828148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.840 [2024-11-20 17:54:28.828158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:01.840 [2024-11-20 17:54:28.828173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:01.840 [2024-11-20 17:54:28.828182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.840 [2024-11-20 17:54:28.828196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:01.840 [2024-11-20 17:54:28.828206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:01.840 [2024-11-20 17:54:28.828219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:01.840 [2024-11-20 17:54:28.828229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:01.840 [2024-11-20 17:54:28.828248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:01.840 [2024-11-20 17:54:28.828257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.840 [2024-11-20 17:54:28.828269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:01.840 [2024-11-20 17:54:28.828293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:01.840 [2024-11-20 17:54:28.828305] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:01.840 [2024-11-20 17:54:28.828315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:01.840 [2024-11-20 17:54:28.828326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:01.840 [2024-11-20 17:54:28.828335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.840 [2024-11-20 17:54:28.828347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:01.840 [2024-11-20 17:54:28.828356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:01.840 [2024-11-20 17:54:28.828370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.840 [2024-11-20 17:54:28.828379] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:01.840 [2024-11-20 17:54:28.828392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:01.840 [2024-11-20 17:54:28.828402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:01.840 [2024-11-20 17:54:28.828414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:01.840 [2024-11-20 17:54:28.828424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:01.840 [2024-11-20 17:54:28.828439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:01.840 [2024-11-20 17:54:28.828448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:01.840 [2024-11-20 17:54:28.828460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:01.840 [2024-11-20 17:54:28.828470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:01.840 [2024-11-20 17:54:28.828481] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:01.840 [2024-11-20 17:54:28.828495] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:01.840 [2024-11-20 17:54:28.828509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.840 [2024-11-20 17:54:28.828521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:01.840 [2024-11-20 17:54:28.828534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:01.840 [2024-11-20 17:54:28.828544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:01.840 [2024-11-20 17:54:28.828557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:01.840 [2024-11-20 17:54:28.828567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:01.840 [2024-11-20 17:54:28.828580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:01.840 [2024-11-20 17:54:28.828590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:01.840 [2024-11-20 17:54:28.828602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:20:01.840 [2024-11-20 17:54:28.828612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:01.840 [2024-11-20 17:54:28.828627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:01.841 [2024-11-20 17:54:28.828637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:01.841 [2024-11-20 17:54:28.828651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:01.841 [2024-11-20 17:54:28.828661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:01.841 [2024-11-20 17:54:28.828674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:01.841 [2024-11-20 17:54:28.828684] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:01.841 [2024-11-20 17:54:28.828697] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:01.841 [2024-11-20 17:54:28.828712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:01.841 [2024-11-20 17:54:28.828724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:01.841 [2024-11-20 17:54:28.828734] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:01.841 [2024-11-20 17:54:28.828748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:01.841 [2024-11-20 17:54:28.828758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:01.841 [2024-11-20 17:54:28.828782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:01.841 [2024-11-20 17:54:28.828793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:20:01.841 [2024-11-20 17:54:28.828805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:01.841 [2024-11-20 17:54:28.828874] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
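[Note: the layout dump above is internally consistent; a quick check using only numbers printed in the trace. With the 60 MiB --l2p_dram_limit passed to bdev_ftl_create, only part of the 80 MiB L2P table stays resident, which is why the startup later reports "l2p maximum resident size is: 59 (of 60) MiB".]

    l2p_entries=20971520    # "L2P entries" in the layout dump above
    l2p_addr_size=4         # bytes, "L2P address size" above
    block_size=4096

    # L2P region: 20971520 entries * 4 B = 80 MiB ("Region l2p ... blocks: 80.00 MiB")
    echo $(( l2p_entries * l2p_addr_size / 1024 / 1024 ))        # 80

    # One L2P entry per exported logical block, so ftl0 is created with
    # num_blocks = 20971520 (visible in the bdev_get_bdevs output below):
    echo $(( l2p_entries * block_size / 1024 / 1024 / 1024 ))    # 80 GiB exported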
00:20:01.841 [2024-11-20 17:54:28.828892] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:06.037 [2024-11-20 17:54:32.660270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.037 [2024-11-20 17:54:32.660347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:06.037 [2024-11-20 17:54:32.660364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3837.609 ms 00:20:06.037 [2024-11-20 17:54:32.660378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.037 [2024-11-20 17:54:32.699170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.037 [2024-11-20 17:54:32.699230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.037 [2024-11-20 17:54:32.699246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.423 ms 00:20:06.037 [2024-11-20 17:54:32.699260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.037 [2024-11-20 17:54:32.699406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.037 [2024-11-20 17:54:32.699422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:06.037 [2024-11-20 17:54:32.699435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:20:06.037 [2024-11-20 17:54:32.699450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.037 [2024-11-20 17:54:32.768395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.037 [2024-11-20 17:54:32.768447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.037 [2024-11-20 17:54:32.768467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.002 ms 00:20:06.037 [2024-11-20 17:54:32.768480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.037 [2024-11-20 17:54:32.768546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.037 [2024-11-20 17:54:32.768564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.037 [2024-11-20 17:54:32.768575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:20:06.037 [2024-11-20 17:54:32.768588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.037 [2024-11-20 17:54:32.769100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.037 [2024-11-20 17:54:32.769126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.037 [2024-11-20 17:54:32.769137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:20:06.037 [2024-11-20 17:54:32.769153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.037 [2024-11-20 17:54:32.769272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.037 [2024-11-20 17:54:32.769289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.037 [2024-11-20 17:54:32.769300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:20:06.037 [2024-11-20 17:54:32.769316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.037 [2024-11-20 17:54:32.790405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.037 [2024-11-20 17:54:32.790452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.038 [2024-11-20 
17:54:32.790467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.094 ms 00:20:06.038 [2024-11-20 17:54:32.790480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.038 [2024-11-20 17:54:32.803025] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:06.038 [2024-11-20 17:54:32.819338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.038 [2024-11-20 17:54:32.819408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:06.038 [2024-11-20 17:54:32.819426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.783 ms 00:20:06.038 [2024-11-20 17:54:32.819441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.038 [2024-11-20 17:54:32.909971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.038 [2024-11-20 17:54:32.910022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:06.038 [2024-11-20 17:54:32.910045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.618 ms 00:20:06.038 [2024-11-20 17:54:32.910056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.038 [2024-11-20 17:54:32.910260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.038 [2024-11-20 17:54:32.910274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:06.038 [2024-11-20 17:54:32.910290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:20:06.038 [2024-11-20 17:54:32.910301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.038 [2024-11-20 17:54:32.946430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.038 [2024-11-20 17:54:32.946477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:06.038 [2024-11-20 17:54:32.946494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.123 ms 00:20:06.038 [2024-11-20 17:54:32.946505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.038 [2024-11-20 17:54:32.981838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.038 [2024-11-20 17:54:32.981878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:06.038 [2024-11-20 17:54:32.981895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.333 ms 00:20:06.038 [2024-11-20 17:54:32.981905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.038 [2024-11-20 17:54:32.982659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.038 [2024-11-20 17:54:32.982688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:06.038 [2024-11-20 17:54:32.982709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.710 ms 00:20:06.038 [2024-11-20 17:54:32.982719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.038 [2024-11-20 17:54:33.086885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.038 [2024-11-20 17:54:33.086932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:06.038 [2024-11-20 17:54:33.086954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.248 ms 00:20:06.038 [2024-11-20 17:54:33.086969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.038 [2024-11-20 
17:54:33.124042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.038 [2024-11-20 17:54:33.124084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:06.038 [2024-11-20 17:54:33.124102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.027 ms 00:20:06.038 [2024-11-20 17:54:33.124114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.038 [2024-11-20 17:54:33.161324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.038 [2024-11-20 17:54:33.161368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:06.038 [2024-11-20 17:54:33.161385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.213 ms 00:20:06.038 [2024-11-20 17:54:33.161396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.038 [2024-11-20 17:54:33.198724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.038 [2024-11-20 17:54:33.198775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:06.038 [2024-11-20 17:54:33.198793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.333 ms 00:20:06.038 [2024-11-20 17:54:33.198803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.038 [2024-11-20 17:54:33.198857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.038 [2024-11-20 17:54:33.198869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:06.038 [2024-11-20 17:54:33.198889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:06.038 [2024-11-20 17:54:33.198900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.038 [2024-11-20 17:54:33.199029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.038 [2024-11-20 17:54:33.199042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:06.038 [2024-11-20 17:54:33.199055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:06.038 [2024-11-20 17:54:33.199069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.038 [2024-11-20 17:54:33.200348] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4395.214 ms, result 0 00:20:06.038 { 00:20:06.038 "name": "ftl0", 00:20:06.038 "uuid": "e698cc5b-0bac-40ff-b27e-ba23cb9f865e" 00:20:06.038 } 00:20:06.298 17:54:33 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:20:06.298 17:54:33 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:20:06.298 17:54:33 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:06.298 17:54:33 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:20:06.298 17:54:33 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:06.298 17:54:33 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:06.298 17:54:33 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:20:06.298 17:54:33 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:20:06.557 [ 00:20:06.557 { 00:20:06.557 "name": "ftl0", 00:20:06.557 "aliases": [ 00:20:06.557 "e698cc5b-0bac-40ff-b27e-ba23cb9f865e" 00:20:06.558 ], 00:20:06.558 "product_name": "FTL 
disk", 00:20:06.558 "block_size": 4096, 00:20:06.558 "num_blocks": 20971520, 00:20:06.558 "uuid": "e698cc5b-0bac-40ff-b27e-ba23cb9f865e", 00:20:06.558 "assigned_rate_limits": { 00:20:06.558 "rw_ios_per_sec": 0, 00:20:06.558 "rw_mbytes_per_sec": 0, 00:20:06.558 "r_mbytes_per_sec": 0, 00:20:06.558 "w_mbytes_per_sec": 0 00:20:06.558 }, 00:20:06.558 "claimed": false, 00:20:06.558 "zoned": false, 00:20:06.558 "supported_io_types": { 00:20:06.558 "read": true, 00:20:06.558 "write": true, 00:20:06.558 "unmap": true, 00:20:06.558 "flush": true, 00:20:06.558 "reset": false, 00:20:06.558 "nvme_admin": false, 00:20:06.558 "nvme_io": false, 00:20:06.558 "nvme_io_md": false, 00:20:06.558 "write_zeroes": true, 00:20:06.558 "zcopy": false, 00:20:06.558 "get_zone_info": false, 00:20:06.558 "zone_management": false, 00:20:06.558 "zone_append": false, 00:20:06.558 "compare": false, 00:20:06.558 "compare_and_write": false, 00:20:06.558 "abort": false, 00:20:06.558 "seek_hole": false, 00:20:06.558 "seek_data": false, 00:20:06.558 "copy": false, 00:20:06.558 "nvme_iov_md": false 00:20:06.558 }, 00:20:06.558 "driver_specific": { 00:20:06.558 "ftl": { 00:20:06.558 "base_bdev": "37805023-1674-46ce-8b37-cee13b76b5fe", 00:20:06.558 "cache": "nvc0n1p0" 00:20:06.558 } 00:20:06.558 } 00:20:06.558 } 00:20:06.558 ] 00:20:06.558 17:54:33 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:20:06.558 17:54:33 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:20:06.558 17:54:33 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:06.817 17:54:33 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:20:06.817 17:54:33 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:07.077 [2024-11-20 17:54:34.045641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.077 [2024-11-20 17:54:34.045701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:07.077 [2024-11-20 17:54:34.045724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:07.077 [2024-11-20 17:54:34.045738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.077 [2024-11-20 17:54:34.045798] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:07.077 [2024-11-20 17:54:34.050039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.077 [2024-11-20 17:54:34.050075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:07.077 [2024-11-20 17:54:34.050091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.221 ms 00:20:07.077 [2024-11-20 17:54:34.050102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.077 [2024-11-20 17:54:34.051042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.077 [2024-11-20 17:54:34.051069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:07.077 [2024-11-20 17:54:34.051084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.885 ms 00:20:07.077 [2024-11-20 17:54:34.051095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.077 [2024-11-20 17:54:34.053671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.077 [2024-11-20 17:54:34.053695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:07.077 
[2024-11-20 17:54:34.053718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.534 ms 00:20:07.077 [2024-11-20 17:54:34.053729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.077 [2024-11-20 17:54:34.059612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.077 [2024-11-20 17:54:34.059652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:07.077 [2024-11-20 17:54:34.059669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.072 ms 00:20:07.077 [2024-11-20 17:54:34.059680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.077 [2024-11-20 17:54:34.096522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.077 [2024-11-20 17:54:34.096564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:07.077 [2024-11-20 17:54:34.096581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.764 ms 00:20:07.077 [2024-11-20 17:54:34.096591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.077 [2024-11-20 17:54:34.119648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.077 [2024-11-20 17:54:34.119690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:07.077 [2024-11-20 17:54:34.119710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.981 ms 00:20:07.077 [2024-11-20 17:54:34.119721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.077 [2024-11-20 17:54:34.120010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.077 [2024-11-20 17:54:34.120025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:07.077 [2024-11-20 17:54:34.120039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:20:07.077 [2024-11-20 17:54:34.120050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.077 [2024-11-20 17:54:34.156691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.077 [2024-11-20 17:54:34.156729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:07.077 [2024-11-20 17:54:34.156746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.645 ms 00:20:07.077 [2024-11-20 17:54:34.156756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.077 [2024-11-20 17:54:34.194058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.077 [2024-11-20 17:54:34.194097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:07.077 [2024-11-20 17:54:34.194113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.282 ms 00:20:07.077 [2024-11-20 17:54:34.194123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.077 [2024-11-20 17:54:34.230104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.077 [2024-11-20 17:54:34.230142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:07.077 [2024-11-20 17:54:34.230157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.974 ms 00:20:07.077 [2024-11-20 17:54:34.230167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.339 [2024-11-20 17:54:34.266813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.339 [2024-11-20 17:54:34.266853] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:07.339 [2024-11-20 17:54:34.266869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.536 ms 00:20:07.339 [2024-11-20 17:54:34.266879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.339 [2024-11-20 17:54:34.266968] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:07.339 [2024-11-20 17:54:34.266986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 
[2024-11-20 17:54:34.267268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:20:07.339 [2024-11-20 17:54:34.267590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:07.339 [2024-11-20 17:54:34.267855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.267866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.267892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.267903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.267918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.267929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.267943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.267954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.267966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.267977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.267990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:07.340 [2024-11-20 17:54:34.268295] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:07.340 [2024-11-20 17:54:34.268308] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e698cc5b-0bac-40ff-b27e-ba23cb9f865e 00:20:07.340 [2024-11-20 17:54:34.268319] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:07.340 [2024-11-20 17:54:34.268334] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:07.340 [2024-11-20 17:54:34.268343] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:07.340 [2024-11-20 17:54:34.268360] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:07.340 [2024-11-20 17:54:34.268369] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:07.340 [2024-11-20 17:54:34.268382] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:07.340 [2024-11-20 17:54:34.268393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:07.340 [2024-11-20 17:54:34.268404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:07.340 [2024-11-20 17:54:34.268413] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:07.340 [2024-11-20 17:54:34.268426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.340 [2024-11-20 17:54:34.268436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:07.340 [2024-11-20 17:54:34.268449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.462 ms 00:20:07.340 [2024-11-20 17:54:34.268459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.340 [2024-11-20 17:54:34.288511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.340 [2024-11-20 17:54:34.288552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:07.340 [2024-11-20 17:54:34.288568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.002 ms 00:20:07.340 [2024-11-20 17:54:34.288579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.340 [2024-11-20 17:54:34.289148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.340 [2024-11-20 17:54:34.289167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:07.340 [2024-11-20 17:54:34.289181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:20:07.340 [2024-11-20 17:54:34.289191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.340 [2024-11-20 17:54:34.358997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.340 [2024-11-20 17:54:34.359047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:07.340 [2024-11-20 17:54:34.359064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.340 [2024-11-20 17:54:34.359075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
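[Note: on the statistics block above — the "WAF: inf" line follows directly from the two counters printed with it. Write amplification factor = total writes / user writes; user writes is 0 because the device is unloaded here, before the fio job below ever runs, so all 960 total writes are internal (non-user) metadata writes. A one-line check, with the counters copied from the dump:]

    total=960; user=0
    awk -v t="$total" -v u="$user" 'BEGIN { print (u ? t/u : "inf") }'   # inf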
00:20:07.340 [2024-11-20 17:54:34.359170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.340 [2024-11-20 17:54:34.359181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:07.340 [2024-11-20 17:54:34.359195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.340 [2024-11-20 17:54:34.359205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.340 [2024-11-20 17:54:34.359351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.340 [2024-11-20 17:54:34.359369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:07.340 [2024-11-20 17:54:34.359382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.340 [2024-11-20 17:54:34.359393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.340 [2024-11-20 17:54:34.359444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.340 [2024-11-20 17:54:34.359454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:07.340 [2024-11-20 17:54:34.359467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.340 [2024-11-20 17:54:34.359478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.340 [2024-11-20 17:54:34.492234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.340 [2024-11-20 17:54:34.492286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:07.340 [2024-11-20 17:54:34.492305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.340 [2024-11-20 17:54:34.492317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.601 [2024-11-20 17:54:34.592942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.601 [2024-11-20 17:54:34.592996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:07.601 [2024-11-20 17:54:34.593014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.601 [2024-11-20 17:54:34.593025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.601 [2024-11-20 17:54:34.593173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.601 [2024-11-20 17:54:34.593186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:07.601 [2024-11-20 17:54:34.593202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.601 [2024-11-20 17:54:34.593213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.601 [2024-11-20 17:54:34.593344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.601 [2024-11-20 17:54:34.593361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:07.601 [2024-11-20 17:54:34.593374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.601 [2024-11-20 17:54:34.593384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.601 [2024-11-20 17:54:34.593548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.601 [2024-11-20 17:54:34.593566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:07.601 [2024-11-20 17:54:34.593580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.601 [2024-11-20 
17:54:34.593593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.601 [2024-11-20 17:54:34.593676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.601 [2024-11-20 17:54:34.593692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:07.601 [2024-11-20 17:54:34.593704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.601 [2024-11-20 17:54:34.593729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.601 [2024-11-20 17:54:34.593815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.601 [2024-11-20 17:54:34.593829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:07.601 [2024-11-20 17:54:34.593843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.601 [2024-11-20 17:54:34.593853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.601 [2024-11-20 17:54:34.593925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.601 [2024-11-20 17:54:34.593937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:07.601 [2024-11-20 17:54:34.593950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.601 [2024-11-20 17:54:34.593960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.601 [2024-11-20 17:54:34.594212] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 549.437 ms, result 0 00:20:07.601 true 00:20:07.601 17:54:34 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76870 00:20:07.601 17:54:34 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76870 ']' 00:20:07.601 17:54:34 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76870 00:20:07.601 17:54:34 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:20:07.601 17:54:34 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.601 17:54:34 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76870 00:20:07.601 17:54:34 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:07.601 killing process with pid 76870 00:20:07.601 17:54:34 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:07.601 17:54:34 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76870' 00:20:07.601 17:54:34 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76870 00:20:07.601 17:54:34 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76870 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:12.930 17:54:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:20:12.930 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:20:12.930 fio-3.35 00:20:12.930 Starting 1 thread 00:20:18.218 00:20:18.218 test: (groupid=0, jobs=1): err= 0: pid=77093: Wed Nov 20 17:54:45 2024 00:20:18.218 read: IOPS=950, BW=63.1MiB/s (66.2MB/s)(255MiB/4031msec) 00:20:18.218 slat (nsec): min=4533, max=46707, avg=7045.70, stdev=3072.45 00:20:18.218 clat (usec): min=311, max=998, avg=478.16, stdev=59.17 00:20:18.218 lat (usec): min=322, max=1006, avg=485.21, stdev=59.40 00:20:18.218 clat percentiles (usec): 00:20:18.218 | 1.00th=[ 359], 5.00th=[ 392], 10.00th=[ 396], 20.00th=[ 445], 00:20:18.218 | 30.00th=[ 457], 40.00th=[ 461], 50.00th=[ 465], 60.00th=[ 482], 00:20:18.218 | 70.00th=[ 519], 80.00th=[ 529], 90.00th=[ 537], 95.00th=[ 553], 00:20:18.218 | 99.00th=[ 644], 99.50th=[ 725], 99.90th=[ 914], 99.95th=[ 988], 00:20:18.218 | 99.99th=[ 996] 00:20:18.218 write: IOPS=957, BW=63.6MiB/s (66.7MB/s)(256MiB/4027msec); 0 zone resets 00:20:18.218 slat (usec): min=15, max=116, avg=20.80, stdev= 5.30 00:20:18.218 clat (usec): min=334, max=1030, avg=530.60, stdev=74.65 00:20:18.218 lat (usec): min=358, max=1050, avg=551.40, stdev=74.51 00:20:18.218 clat percentiles (usec): 00:20:18.218 | 1.00th=[ 404], 5.00th=[ 420], 10.00th=[ 465], 20.00th=[ 478], 00:20:18.218 | 30.00th=[ 482], 40.00th=[ 506], 50.00th=[ 537], 60.00th=[ 545], 00:20:18.218 | 70.00th=[ 553], 80.00th=[ 562], 90.00th=[ 611], 95.00th=[ 627], 00:20:18.218 | 99.00th=[ 857], 99.50th=[ 906], 99.90th=[ 947], 99.95th=[ 996], 00:20:18.218 | 99.99th=[ 1029] 00:20:18.218 bw ( KiB/s): min=61608, max=68000, per=100.00%, avg=65127.00, stdev=2217.42, samples=8 00:20:18.218 iops : min= 906, max= 1000, avg=957.75, stdev=32.61, samples=8 00:20:18.218 lat (usec) : 500=51.24%, 750=47.56%, 1000=1.18% 00:20:18.218 lat (msec) : 
2=0.01% 00:20:18.218 cpu : usr=99.21%, sys=0.05%, ctx=6, majf=0, minf=1169 00:20:18.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:18.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.218 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:18.218 00:20:18.218 Run status group 0 (all jobs): 00:20:18.218 READ: bw=63.1MiB/s (66.2MB/s), 63.1MiB/s-63.1MiB/s (66.2MB/s-66.2MB/s), io=255MiB (267MB), run=4031-4031msec 00:20:18.218 WRITE: bw=63.6MiB/s (66.7MB/s), 63.6MiB/s-63.6MiB/s (66.7MB/s-66.7MB/s), io=256MiB (269MB), run=4027-4027msec 00:20:20.120 ----------------------------------------------------- 00:20:20.120 Suppressions used: 00:20:20.120 count bytes template 00:20:20.120 1 5 /usr/src/fio/parse.c 00:20:20.120 1 8 libtcmalloc_minimal.so 00:20:20.120 1 904 libcrypto.so 00:20:20.120 ----------------------------------------------------- 00:20:20.120 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:20.121 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:20.379 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:20.379 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:20.379 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:20.379 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:20.379 17:54:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:20:20.379 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:20.379 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:20.379 fio-3.35 00:20:20.379 Starting 2 threads 00:20:52.458 00:20:52.458 first_half: (groupid=0, jobs=1): err= 0: pid=77196: Wed Nov 20 17:55:14 2024 00:20:52.458 read: IOPS=2523, BW=9.86MiB/s (10.3MB/s)(256MiB/25947msec) 00:20:52.458 slat (nsec): min=3527, max=52004, avg=8867.63, stdev=3894.22 00:20:52.458 clat (usec): min=780, max=279002, avg=42775.85, stdev=27055.63 00:20:52.458 lat (usec): min=784, max=279009, avg=42784.72, stdev=27056.27 00:20:52.458 clat percentiles (msec): 00:20:52.458 | 1.00th=[ 13], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 34], 00:20:52.458 | 30.00th=[ 34], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 38], 00:20:52.458 | 70.00th=[ 41], 80.00th=[ 42], 90.00th=[ 50], 95.00th=[ 88], 00:20:52.458 | 99.00th=[ 184], 99.50th=[ 205], 99.90th=[ 232], 99.95th=[ 255], 00:20:52.458 | 99.99th=[ 271] 00:20:52.458 write: IOPS=2530, BW=9.88MiB/s (10.4MB/s)(256MiB/25903msec); 0 zone resets 00:20:52.458 slat (usec): min=4, max=218, avg= 8.32, stdev= 4.34 00:20:52.458 clat (usec): min=432, max=52632, avg=7913.43, stdev=6941.15 00:20:52.458 lat (usec): min=448, max=52641, avg=7921.76, stdev=6941.12 00:20:52.458 clat percentiles (usec): 00:20:52.458 | 1.00th=[ 1074], 5.00th=[ 1418], 10.00th=[ 1795], 20.00th=[ 3326], 00:20:52.458 | 30.00th=[ 4752], 40.00th=[ 5800], 50.00th=[ 6718], 60.00th=[ 7570], 00:20:52.458 | 70.00th=[ 8586], 80.00th=[10159], 90.00th=[12911], 95.00th=[21103], 00:20:52.458 | 99.00th=[37487], 99.50th=[40109], 99.90th=[49021], 99.95th=[50070], 00:20:52.458 | 99.99th=[51643] 00:20:52.458 bw ( KiB/s): min= 352, max=48120, per=100.00%, avg=23680.00, stdev=14964.46, samples=22 00:20:52.458 iops : min= 88, max=12030, avg=5920.00, stdev=3741.11, samples=22 00:20:52.458 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.25% 00:20:52.458 lat (msec) : 2=5.67%, 4=6.85%, 10=26.92%, 20=9.17%, 50=46.93% 00:20:52.458 lat (msec) : 100=1.99%, 250=2.14%, 500=0.03% 00:20:52.458 cpu : usr=99.22%, sys=0.15%, ctx=34, majf=0, minf=5532 00:20:52.458 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:52.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.458 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.458 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.458 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.458 second_half: (groupid=0, jobs=1): err= 0: pid=77197: Wed Nov 20 17:55:14 2024 00:20:52.458 read: IOPS=2548, BW=9.95MiB/s (10.4MB/s)(256MiB/25699msec) 00:20:52.458 slat (usec): min=3, max=103, avg= 8.27, stdev= 3.81 00:20:52.458 clat (msec): min=10, max=271, avg=43.05, stdev=25.31 00:20:52.458 lat (msec): min=10, max=271, avg=43.06, stdev=25.31 00:20:52.458 clat percentiles (msec): 00:20:52.458 | 1.00th=[ 30], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:20:52.458 | 30.00th=[ 34], 40.00th=[ 37], 50.00th=[ 37], 60.00th=[ 38], 00:20:52.458 | 70.00th=[ 41], 80.00th=[ 42], 90.00th=[ 50], 95.00th=[ 79], 00:20:52.458 | 99.00th=[ 184], 99.50th=[ 209], 
99.90th=[ 230], 99.95th=[ 232], 00:20:52.458 | 99.99th=[ 257] 00:20:52.458 write: IOPS=2564, BW=10.0MiB/s (10.5MB/s)(256MiB/25555msec); 0 zone resets 00:20:52.458 slat (usec): min=4, max=438, avg= 8.59, stdev= 6.13 00:20:52.458 clat (usec): min=400, max=42362, avg=7141.14, stdev=4000.60 00:20:52.458 lat (usec): min=409, max=42385, avg=7149.74, stdev=4000.64 00:20:52.458 clat percentiles (usec): 00:20:52.458 | 1.00th=[ 1270], 5.00th=[ 2114], 10.00th=[ 2868], 20.00th=[ 4146], 00:20:52.458 | 30.00th=[ 5145], 40.00th=[ 5932], 50.00th=[ 6521], 60.00th=[ 7308], 00:20:52.458 | 70.00th=[ 8094], 80.00th=[ 9765], 90.00th=[12125], 95.00th=[13304], 00:20:52.458 | 99.00th=[20055], 99.50th=[28967], 99.90th=[39584], 99.95th=[40633], 00:20:52.458 | 99.99th=[41681] 00:20:52.458 bw ( KiB/s): min= 1864, max=40864, per=100.00%, avg=20942.08, stdev=11553.74, samples=25 00:20:52.458 iops : min= 466, max=10216, avg=5235.52, stdev=2888.44, samples=25 00:20:52.458 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.14% 00:20:52.459 lat (msec) : 2=1.86%, 4=7.39%, 10=31.29%, 20=8.80%, 50=46.12% 00:20:52.459 lat (msec) : 100=2.37%, 250=1.96%, 500=0.01% 00:20:52.459 cpu : usr=99.26%, sys=0.16%, ctx=52, majf=0, minf=5571 00:20:52.459 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:20:52.459 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:52.459 complete : 0=0.0%, 4=99.8%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:52.459 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:52.459 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:52.459 00:20:52.459 Run status group 0 (all jobs): 00:20:52.459 READ: bw=19.7MiB/s (20.7MB/s), 9.86MiB/s-9.95MiB/s (10.3MB/s-10.4MB/s), io=512MiB (536MB), run=25699-25947msec 00:20:52.459 WRITE: bw=19.8MiB/s (20.7MB/s), 9.88MiB/s-10.0MiB/s (10.4MB/s-10.5MB/s), io=512MiB (537MB), run=25555-25903msec 00:20:52.459 ----------------------------------------------------- 00:20:52.459 Suppressions used: 00:20:52.459 count bytes template 00:20:52.459 2 10 /usr/src/fio/parse.c 00:20:52.459 3 288 /usr/src/fio/iolog.c 00:20:52.459 1 8 libtcmalloc_minimal.so 00:20:52.459 1 904 libcrypto.so 00:20:52.459 ----------------------------------------------------- 00:20:52.459 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:52.459 17:55:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:20:52.459 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:20:52.459 fio-3.35 00:20:52.459 Starting 1 thread 00:21:07.342 00:21:07.342 test: (groupid=0, jobs=1): err= 0: pid=77539: Wed Nov 20 17:55:31 2024 00:21:07.342 read: IOPS=8144, BW=31.8MiB/s (33.4MB/s)(255MiB/8006msec) 00:21:07.342 slat (nsec): min=3363, max=31876, avg=5167.60, stdev=1551.63 00:21:07.342 clat (usec): min=608, max=30748, avg=15707.92, stdev=739.20 00:21:07.342 lat (usec): min=612, max=30754, avg=15713.09, stdev=739.21 00:21:07.342 clat percentiles (usec): 00:21:07.342 | 1.00th=[14746], 5.00th=[15008], 10.00th=[15139], 20.00th=[15270], 00:21:07.342 | 30.00th=[15401], 40.00th=[15533], 50.00th=[15664], 60.00th=[15795], 00:21:07.342 | 70.00th=[15926], 80.00th=[16057], 90.00th=[16319], 95.00th=[16581], 00:21:07.342 | 99.00th=[17695], 99.50th=[17957], 99.90th=[22676], 99.95th=[26608], 00:21:07.342 | 99.99th=[30016] 00:21:07.342 write: IOPS=14.0k, BW=54.7MiB/s (57.3MB/s)(256MiB/4684msec); 0 zone resets 00:21:07.342 slat (usec): min=4, max=659, avg= 7.67, stdev= 6.38 00:21:07.342 clat (usec): min=524, max=52254, avg=9102.01, stdev=11156.26 00:21:07.343 lat (usec): min=532, max=52261, avg=9109.68, stdev=11156.30 00:21:07.343 clat percentiles (usec): 00:21:07.343 | 1.00th=[ 930], 5.00th=[ 1123], 10.00th=[ 1254], 20.00th=[ 1434], 00:21:07.343 | 30.00th=[ 1614], 40.00th=[ 1942], 50.00th=[ 5932], 60.00th=[ 6783], 00:21:07.343 | 70.00th=[ 7832], 80.00th=[ 9765], 90.00th=[33162], 95.00th=[34866], 00:21:07.343 | 99.00th=[36963], 99.50th=[37487], 99.90th=[39584], 99.95th=[42730], 00:21:07.343 | 99.99th=[49021] 00:21:07.343 bw ( KiB/s): min=17232, max=78696, per=93.68%, avg=52428.80, stdev=15800.02, samples=10 00:21:07.343 iops : min= 4308, max=19674, avg=13107.20, stdev=3950.01, samples=10 00:21:07.343 lat (usec) : 750=0.03%, 1000=1.01% 00:21:07.343 lat (msec) : 2=19.28%, 4=0.84%, 10=19.37%, 20=51.43%, 50=8.04% 00:21:07.343 lat (msec) : 100=0.01% 00:21:07.343 cpu : usr=99.04%, sys=0.28%, ctx=28, majf=0, minf=5565 00:21:07.343 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:07.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.343 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:07.343 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.343 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:07.343 00:21:07.343 Run status group 0 (all jobs): 00:21:07.343 READ: bw=31.8MiB/s (33.4MB/s), 31.8MiB/s-31.8MiB/s (33.4MB/s-33.4MB/s), io=255MiB (267MB), run=8006-8006msec 00:21:07.343 WRITE: bw=54.7MiB/s (57.3MB/s), 54.7MiB/s-54.7MiB/s (57.3MB/s-57.3MB/s), io=256MiB (268MB), run=4684-4684msec 00:21:07.343 ----------------------------------------------------- 00:21:07.343 Suppressions used: 00:21:07.343 count bytes template 00:21:07.343 1 5 /usr/src/fio/parse.c 00:21:07.343 2 192 /usr/src/fio/iolog.c 00:21:07.343 1 8 libtcmalloc_minimal.so 00:21:07.343 1 904 libcrypto.so 00:21:07.343 ----------------------------------------------------- 00:21:07.343 00:21:07.343 17:55:34 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:21:07.343 17:55:34 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:07.343 17:55:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:07.343 17:55:34 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:07.343 17:55:34 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:21:07.343 Remove shared memory files 00:21:07.343 17:55:34 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:07.343 17:55:34 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:21:07.343 17:55:34 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:21:07.343 17:55:34 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57770 /dev/shm/spdk_tgt_trace.pid75764 00:21:07.343 17:55:34 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:07.343 17:55:34 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:21:07.343 00:21:07.343 real 1m9.749s 00:21:07.343 user 2m32.234s 00:21:07.343 sys 0m3.763s 00:21:07.343 17:55:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:07.343 17:55:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:21:07.343 ************************************ 00:21:07.343 END TEST ftl_fio_basic 00:21:07.343 ************************************ 00:21:07.343 17:55:34 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:21:07.343 17:55:34 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:07.343 17:55:34 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:07.343 17:55:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:07.343 ************************************ 00:21:07.343 START TEST ftl_bdevperf 00:21:07.343 ************************************ 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:21:07.343 * Looking for test storage... 
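The three fio passes in this test (randw-verify, randw-verify-j2, randw-verify-depth128) all go through the same fio_bdev/fio_plugin helper whose xtrace appears above: find the ASan runtime the SPDK fio plugin links against, then preload both it and the plugin into a stock fio binary so the instrumented plugin can resolve its sanitizer symbols. A condensed sketch of that helper, with paths taken from this run:

  # Minimal sketch of the fio_plugin helper traced above (paths from this run).
  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
  # Pull the ASan runtime path out of the plugin's dynamic dependencies.
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  # Preload the sanitizer runtime ahead of the plugin, then run the job file.
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio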
00:21:07.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:07.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.343 --rc genhtml_branch_coverage=1 00:21:07.343 --rc genhtml_function_coverage=1 00:21:07.343 --rc genhtml_legend=1 00:21:07.343 --rc geninfo_all_blocks=1 00:21:07.343 --rc geninfo_unexecuted_blocks=1 00:21:07.343 00:21:07.343 ' 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:07.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.343 --rc genhtml_branch_coverage=1 00:21:07.343 
--rc genhtml_function_coverage=1 00:21:07.343 --rc genhtml_legend=1 00:21:07.343 --rc geninfo_all_blocks=1 00:21:07.343 --rc geninfo_unexecuted_blocks=1 00:21:07.343 00:21:07.343 ' 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:07.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.343 --rc genhtml_branch_coverage=1 00:21:07.343 --rc genhtml_function_coverage=1 00:21:07.343 --rc genhtml_legend=1 00:21:07.343 --rc geninfo_all_blocks=1 00:21:07.343 --rc geninfo_unexecuted_blocks=1 00:21:07.343 00:21:07.343 ' 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:07.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:07.343 --rc genhtml_branch_coverage=1 00:21:07.343 --rc genhtml_function_coverage=1 00:21:07.343 --rc genhtml_legend=1 00:21:07.343 --rc geninfo_all_blocks=1 00:21:07.343 --rc geninfo_unexecuted_blocks=1 00:21:07.343 00:21:07.343 ' 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:07.343 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77773 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77773 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77773 ']' 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:21:07.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:07.344 17:55:34 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:07.611 [2024-11-20 17:55:34.542914] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
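bdevperf is launched idle here: -z makes it start up and wait for an RPC trigger instead of running immediately, -T ftl0 points it at the FTL bdev that is constructed next, and waitforlisten blocks until the application's RPC socket answers. A minimal sketch of that pattern, assuming the stock SPDK helpers (waitforlisten comes from test/common/autotest_common.sh; the bdevperf.py location can vary between SPDK versions):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  "$bdevperf" -z -T ftl0 &        # -z: start idle, wait for an RPC trigger
  bdevperf_pid=$!
  waitforlisten "$bdevperf_pid"   # poll until /var/tmp/spdk.sock accepts RPCs
  # ... create ftl0 over RPC (see the bdev_ftl_create sequence traced below) ...
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 240 perform_tests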
00:21:07.612 [2024-11-20 17:55:34.543051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77773 ] 00:21:07.612 [2024-11-20 17:55:34.729610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.872 [2024-11-20 17:55:34.836760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:08.466 17:55:35 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:08.466 17:55:35 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:21:08.466 17:55:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:08.466 17:55:35 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:21:08.466 17:55:35 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:08.466 17:55:35 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:21:08.466 17:55:35 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:21:08.466 17:55:35 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:08.466 17:55:35 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:08.466 17:55:35 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:21:08.726 17:55:35 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:08.726 17:55:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:08.726 17:55:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:08.726 17:55:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:08.726 17:55:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:08.726 17:55:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:08.985 17:55:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:08.985 { 00:21:08.985 "name": "nvme0n1", 00:21:08.985 "aliases": [ 00:21:08.985 "2ca4118d-d618-4309-ac21-917740f53977" 00:21:08.985 ], 00:21:08.985 "product_name": "NVMe disk", 00:21:08.985 "block_size": 4096, 00:21:08.985 "num_blocks": 1310720, 00:21:08.985 "uuid": "2ca4118d-d618-4309-ac21-917740f53977", 00:21:08.985 "numa_id": -1, 00:21:08.985 "assigned_rate_limits": { 00:21:08.985 "rw_ios_per_sec": 0, 00:21:08.985 "rw_mbytes_per_sec": 0, 00:21:08.985 "r_mbytes_per_sec": 0, 00:21:08.985 "w_mbytes_per_sec": 0 00:21:08.985 }, 00:21:08.985 "claimed": true, 00:21:08.985 "claim_type": "read_many_write_one", 00:21:08.985 "zoned": false, 00:21:08.985 "supported_io_types": { 00:21:08.985 "read": true, 00:21:08.985 "write": true, 00:21:08.985 "unmap": true, 00:21:08.985 "flush": true, 00:21:08.985 "reset": true, 00:21:08.985 "nvme_admin": true, 00:21:08.985 "nvme_io": true, 00:21:08.985 "nvme_io_md": false, 00:21:08.985 "write_zeroes": true, 00:21:08.985 "zcopy": false, 00:21:08.985 "get_zone_info": false, 00:21:08.985 "zone_management": false, 00:21:08.985 "zone_append": false, 00:21:08.985 "compare": true, 00:21:08.985 "compare_and_write": false, 00:21:08.985 "abort": true, 00:21:08.985 "seek_hole": false, 00:21:08.985 "seek_data": false, 00:21:08.985 "copy": true, 00:21:08.985 "nvme_iov_md": false 00:21:08.985 }, 00:21:08.985 "driver_specific": { 00:21:08.985 
"nvme": [ 00:21:08.985 { 00:21:08.985 "pci_address": "0000:00:11.0", 00:21:08.985 "trid": { 00:21:08.985 "trtype": "PCIe", 00:21:08.985 "traddr": "0000:00:11.0" 00:21:08.985 }, 00:21:08.985 "ctrlr_data": { 00:21:08.985 "cntlid": 0, 00:21:08.985 "vendor_id": "0x1b36", 00:21:08.985 "model_number": "QEMU NVMe Ctrl", 00:21:08.985 "serial_number": "12341", 00:21:08.985 "firmware_revision": "8.0.0", 00:21:08.985 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:08.985 "oacs": { 00:21:08.985 "security": 0, 00:21:08.985 "format": 1, 00:21:08.985 "firmware": 0, 00:21:08.985 "ns_manage": 1 00:21:08.985 }, 00:21:08.985 "multi_ctrlr": false, 00:21:08.985 "ana_reporting": false 00:21:08.985 }, 00:21:08.985 "vs": { 00:21:08.985 "nvme_version": "1.4" 00:21:08.985 }, 00:21:08.985 "ns_data": { 00:21:08.985 "id": 1, 00:21:08.985 "can_share": false 00:21:08.985 } 00:21:08.985 } 00:21:08.985 ], 00:21:08.985 "mp_policy": "active_passive" 00:21:08.985 } 00:21:08.985 } 00:21:08.985 ]' 00:21:08.985 17:55:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:08.985 17:55:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:08.985 17:55:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:08.985 17:55:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:08.986 17:55:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:08.986 17:55:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:21:08.986 17:55:36 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:21:08.986 17:55:36 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:08.986 17:55:36 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:21:08.986 17:55:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:08.986 17:55:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:09.245 17:55:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=4554cc99-1ef1-406a-b7fd-8c03a2ec3b2c 00:21:09.245 17:55:36 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:21:09.245 17:55:36 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4554cc99-1ef1-406a-b7fd-8c03a2ec3b2c 00:21:09.504 17:55:36 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:09.504 17:55:36 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=676be418-944a-4839-bc2a-714af6befebd 00:21:09.504 17:55:36 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 676be418-944a-4839-bc2a-714af6befebd 00:21:09.764 17:55:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=f50c4146-4cf7-4045-ae01-6a7b3df5864f 00:21:09.764 17:55:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f50c4146-4cf7-4045-ae01-6a7b3df5864f 00:21:09.764 17:55:36 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:21:09.764 17:55:36 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:09.764 17:55:36 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=f50c4146-4cf7-4045-ae01-6a7b3df5864f 00:21:09.764 17:55:36 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:21:09.764 17:55:36 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size f50c4146-4cf7-4045-ae01-6a7b3df5864f 00:21:09.764 17:55:36 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=f50c4146-4cf7-4045-ae01-6a7b3df5864f 00:21:09.764 17:55:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:09.764 17:55:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:09.764 17:55:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:09.764 17:55:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f50c4146-4cf7-4045-ae01-6a7b3df5864f 00:21:10.024 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:10.024 { 00:21:10.024 "name": "f50c4146-4cf7-4045-ae01-6a7b3df5864f", 00:21:10.024 "aliases": [ 00:21:10.024 "lvs/nvme0n1p0" 00:21:10.024 ], 00:21:10.024 "product_name": "Logical Volume", 00:21:10.024 "block_size": 4096, 00:21:10.024 "num_blocks": 26476544, 00:21:10.024 "uuid": "f50c4146-4cf7-4045-ae01-6a7b3df5864f", 00:21:10.024 "assigned_rate_limits": { 00:21:10.024 "rw_ios_per_sec": 0, 00:21:10.024 "rw_mbytes_per_sec": 0, 00:21:10.024 "r_mbytes_per_sec": 0, 00:21:10.024 "w_mbytes_per_sec": 0 00:21:10.024 }, 00:21:10.024 "claimed": false, 00:21:10.024 "zoned": false, 00:21:10.024 "supported_io_types": { 00:21:10.024 "read": true, 00:21:10.024 "write": true, 00:21:10.024 "unmap": true, 00:21:10.024 "flush": false, 00:21:10.024 "reset": true, 00:21:10.024 "nvme_admin": false, 00:21:10.024 "nvme_io": false, 00:21:10.024 "nvme_io_md": false, 00:21:10.024 "write_zeroes": true, 00:21:10.024 "zcopy": false, 00:21:10.024 "get_zone_info": false, 00:21:10.024 "zone_management": false, 00:21:10.024 "zone_append": false, 00:21:10.024 "compare": false, 00:21:10.024 "compare_and_write": false, 00:21:10.024 "abort": false, 00:21:10.024 "seek_hole": true, 00:21:10.024 "seek_data": true, 00:21:10.024 "copy": false, 00:21:10.024 "nvme_iov_md": false 00:21:10.024 }, 00:21:10.024 "driver_specific": { 00:21:10.024 "lvol": { 00:21:10.024 "lvol_store_uuid": "676be418-944a-4839-bc2a-714af6befebd", 00:21:10.024 "base_bdev": "nvme0n1", 00:21:10.024 "thin_provision": true, 00:21:10.024 "num_allocated_clusters": 0, 00:21:10.024 "snapshot": false, 00:21:10.024 "clone": false, 00:21:10.024 "esnap_clone": false 00:21:10.024 } 00:21:10.024 } 00:21:10.024 } 00:21:10.024 ]' 00:21:10.024 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:10.024 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:10.024 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:10.024 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:10.024 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:10.024 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:10.024 17:55:37 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:21:10.024 17:55:37 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:21:10.024 17:55:37 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:10.284 17:55:37 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:10.284 17:55:37 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:10.284 17:55:37 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size f50c4146-4cf7-4045-ae01-6a7b3df5864f 00:21:10.284 17:55:37 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=f50c4146-4cf7-4045-ae01-6a7b3df5864f 00:21:10.284 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:10.284 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:21:10.284 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:10.284 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f50c4146-4cf7-4045-ae01-6a7b3df5864f 00:21:10.544 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:10.544 { 00:21:10.544 "name": "f50c4146-4cf7-4045-ae01-6a7b3df5864f", 00:21:10.544 "aliases": [ 00:21:10.544 "lvs/nvme0n1p0" 00:21:10.544 ], 00:21:10.544 "product_name": "Logical Volume", 00:21:10.544 "block_size": 4096, 00:21:10.544 "num_blocks": 26476544, 00:21:10.544 "uuid": "f50c4146-4cf7-4045-ae01-6a7b3df5864f", 00:21:10.544 "assigned_rate_limits": { 00:21:10.544 "rw_ios_per_sec": 0, 00:21:10.544 "rw_mbytes_per_sec": 0, 00:21:10.544 "r_mbytes_per_sec": 0, 00:21:10.544 "w_mbytes_per_sec": 0 00:21:10.544 }, 00:21:10.544 "claimed": false, 00:21:10.544 "zoned": false, 00:21:10.544 "supported_io_types": { 00:21:10.544 "read": true, 00:21:10.544 "write": true, 00:21:10.544 "unmap": true, 00:21:10.544 "flush": false, 00:21:10.544 "reset": true, 00:21:10.544 "nvme_admin": false, 00:21:10.544 "nvme_io": false, 00:21:10.544 "nvme_io_md": false, 00:21:10.544 "write_zeroes": true, 00:21:10.544 "zcopy": false, 00:21:10.544 "get_zone_info": false, 00:21:10.544 "zone_management": false, 00:21:10.544 "zone_append": false, 00:21:10.544 "compare": false, 00:21:10.544 "compare_and_write": false, 00:21:10.544 "abort": false, 00:21:10.544 "seek_hole": true, 00:21:10.544 "seek_data": true, 00:21:10.544 "copy": false, 00:21:10.544 "nvme_iov_md": false 00:21:10.544 }, 00:21:10.544 "driver_specific": { 00:21:10.544 "lvol": { 00:21:10.544 "lvol_store_uuid": "676be418-944a-4839-bc2a-714af6befebd", 00:21:10.544 "base_bdev": "nvme0n1", 00:21:10.544 "thin_provision": true, 00:21:10.544 "num_allocated_clusters": 0, 00:21:10.544 "snapshot": false, 00:21:10.544 "clone": false, 00:21:10.544 "esnap_clone": false 00:21:10.544 } 00:21:10.544 } 00:21:10.544 } 00:21:10.544 ]' 00:21:10.544 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:10.544 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:10.544 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:10.804 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:10.804 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:10.804 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:10.804 17:55:37 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:21:10.804 17:55:37 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:10.804 17:55:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:21:10.804 17:55:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size f50c4146-4cf7-4045-ae01-6a7b3df5864f 00:21:10.804 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=f50c4146-4cf7-4045-ae01-6a7b3df5864f 00:21:10.804 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:10.804 17:55:37 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:21:10.804 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:21:10.804 17:55:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f50c4146-4cf7-4045-ae01-6a7b3df5864f 00:21:11.063 17:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:11.063 { 00:21:11.063 "name": "f50c4146-4cf7-4045-ae01-6a7b3df5864f", 00:21:11.063 "aliases": [ 00:21:11.063 "lvs/nvme0n1p0" 00:21:11.063 ], 00:21:11.063 "product_name": "Logical Volume", 00:21:11.063 "block_size": 4096, 00:21:11.063 "num_blocks": 26476544, 00:21:11.063 "uuid": "f50c4146-4cf7-4045-ae01-6a7b3df5864f", 00:21:11.063 "assigned_rate_limits": { 00:21:11.063 "rw_ios_per_sec": 0, 00:21:11.063 "rw_mbytes_per_sec": 0, 00:21:11.063 "r_mbytes_per_sec": 0, 00:21:11.063 "w_mbytes_per_sec": 0 00:21:11.063 }, 00:21:11.063 "claimed": false, 00:21:11.063 "zoned": false, 00:21:11.063 "supported_io_types": { 00:21:11.063 "read": true, 00:21:11.063 "write": true, 00:21:11.063 "unmap": true, 00:21:11.063 "flush": false, 00:21:11.063 "reset": true, 00:21:11.063 "nvme_admin": false, 00:21:11.063 "nvme_io": false, 00:21:11.063 "nvme_io_md": false, 00:21:11.063 "write_zeroes": true, 00:21:11.063 "zcopy": false, 00:21:11.063 "get_zone_info": false, 00:21:11.063 "zone_management": false, 00:21:11.063 "zone_append": false, 00:21:11.063 "compare": false, 00:21:11.063 "compare_and_write": false, 00:21:11.063 "abort": false, 00:21:11.063 "seek_hole": true, 00:21:11.063 "seek_data": true, 00:21:11.063 "copy": false, 00:21:11.063 "nvme_iov_md": false 00:21:11.063 }, 00:21:11.063 "driver_specific": { 00:21:11.063 "lvol": { 00:21:11.063 "lvol_store_uuid": "676be418-944a-4839-bc2a-714af6befebd", 00:21:11.063 "base_bdev": "nvme0n1", 00:21:11.063 "thin_provision": true, 00:21:11.063 "num_allocated_clusters": 0, 00:21:11.063 "snapshot": false, 00:21:11.063 "clone": false, 00:21:11.063 "esnap_clone": false 00:21:11.063 } 00:21:11.063 } 00:21:11.063 } 00:21:11.063 ]' 00:21:11.063 17:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:11.063 17:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:21:11.063 17:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:11.063 17:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:11.063 17:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:11.063 17:55:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:21:11.063 17:55:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:21:11.063 17:55:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f50c4146-4cf7-4045-ae01-6a7b3df5864f -c nvc0n1p0 --l2p_dram_limit 20 00:21:11.323 [2024-11-20 17:55:38.394067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.323 [2024-11-20 17:55:38.394127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:11.323 [2024-11-20 17:55:38.394143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:11.323 [2024-11-20 17:55:38.394156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.323 [2024-11-20 17:55:38.394237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.323 [2024-11-20 17:55:38.394259] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:11.323 [2024-11-20 17:55:38.394270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:11.323 [2024-11-20 17:55:38.394282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.323 [2024-11-20 17:55:38.394308] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:11.323 [2024-11-20 17:55:38.395426] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:11.323 [2024-11-20 17:55:38.395454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.323 [2024-11-20 17:55:38.395468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:11.323 [2024-11-20 17:55:38.395479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.152 ms 00:21:11.323 [2024-11-20 17:55:38.395492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.323 [2024-11-20 17:55:38.395584] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ebd971be-7814-4c85-8699-d34a37c8600d 00:21:11.323 [2024-11-20 17:55:38.397021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.323 [2024-11-20 17:55:38.397058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:11.323 [2024-11-20 17:55:38.397073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:11.323 [2024-11-20 17:55:38.397090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.323 [2024-11-20 17:55:38.404456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.323 [2024-11-20 17:55:38.404485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:11.323 [2024-11-20 17:55:38.404500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.337 ms 00:21:11.323 [2024-11-20 17:55:38.404510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.323 [2024-11-20 17:55:38.404625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.323 [2024-11-20 17:55:38.404639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:11.323 [2024-11-20 17:55:38.404657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:21:11.323 [2024-11-20 17:55:38.404668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.323 [2024-11-20 17:55:38.404735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.323 [2024-11-20 17:55:38.404747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:11.323 [2024-11-20 17:55:38.404761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:11.323 [2024-11-20 17:55:38.404791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.323 [2024-11-20 17:55:38.404820] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:11.323 [2024-11-20 17:55:38.410118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.323 [2024-11-20 17:55:38.410282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:11.323 [2024-11-20 17:55:38.410303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.320 ms 00:21:11.323 [2024-11-20 17:55:38.410322] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.323 [2024-11-20 17:55:38.410359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.323 [2024-11-20 17:55:38.410374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:11.323 [2024-11-20 17:55:38.410385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:11.323 [2024-11-20 17:55:38.410407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.323 [2024-11-20 17:55:38.410457] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:11.323 [2024-11-20 17:55:38.410610] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:11.323 [2024-11-20 17:55:38.410627] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:11.323 [2024-11-20 17:55:38.410644] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:11.323 [2024-11-20 17:55:38.410658] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:11.323 [2024-11-20 17:55:38.410673] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:11.323 [2024-11-20 17:55:38.410685] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:11.323 [2024-11-20 17:55:38.410697] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:11.323 [2024-11-20 17:55:38.410708] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:11.323 [2024-11-20 17:55:38.410720] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:11.323 [2024-11-20 17:55:38.410731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.323 [2024-11-20 17:55:38.410749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:11.323 [2024-11-20 17:55:38.410760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.290 ms 00:21:11.323 [2024-11-20 17:55:38.410795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.323 [2024-11-20 17:55:38.410884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.323 [2024-11-20 17:55:38.410902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:11.323 [2024-11-20 17:55:38.410913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:11.323 [2024-11-20 17:55:38.410929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.323 [2024-11-20 17:55:38.411013] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:11.323 [2024-11-20 17:55:38.411028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:11.323 [2024-11-20 17:55:38.411042] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:11.323 [2024-11-20 17:55:38.411056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.323 [2024-11-20 17:55:38.411067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:11.323 [2024-11-20 17:55:38.411079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:11.323 [2024-11-20 17:55:38.411089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:11.323 
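A quick cross-check of the layout dump: the superblock above reports 20971520 L2P entries with a 4-byte address size, i.e. exactly the 80.00 MiB shown for the l2p region, while the bdev was created with --l2p_dram_limit 20, so at most a 20 MiB slice of that table can stay resident in DRAM (the paging behaviour is inferred from the option name, not from this log):

  # 20971520 entries x 4 B/entry = 80 MiB, matching "blocks: 80.00 MiB" above.
  echo $(( 20971520 * 4 / 1024 / 1024 ))   # prints 80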
[2024-11-20 17:55:38.411102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:11.323 [2024-11-20 17:55:38.411112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:11.323 [2024-11-20 17:55:38.411124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:11.323 [2024-11-20 17:55:38.411134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:11.324 [2024-11-20 17:55:38.411148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:11.324 [2024-11-20 17:55:38.411159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:11.324 [2024-11-20 17:55:38.411182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:11.324 [2024-11-20 17:55:38.411192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:11.324 [2024-11-20 17:55:38.411207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.324 [2024-11-20 17:55:38.411217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:11.324 [2024-11-20 17:55:38.411231] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:11.324 [2024-11-20 17:55:38.411241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.324 [2024-11-20 17:55:38.411254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:11.324 [2024-11-20 17:55:38.411264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:11.324 [2024-11-20 17:55:38.411276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.324 [2024-11-20 17:55:38.411286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:11.324 [2024-11-20 17:55:38.411298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:11.324 [2024-11-20 17:55:38.411308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.324 [2024-11-20 17:55:38.411319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:11.324 [2024-11-20 17:55:38.411329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:11.324 [2024-11-20 17:55:38.411341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.324 [2024-11-20 17:55:38.411350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:11.324 [2024-11-20 17:55:38.411363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:11.324 [2024-11-20 17:55:38.411372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:11.324 [2024-11-20 17:55:38.411386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:11.324 [2024-11-20 17:55:38.411396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:11.324 [2024-11-20 17:55:38.411408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:11.324 [2024-11-20 17:55:38.411417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:11.324 [2024-11-20 17:55:38.411430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:11.324 [2024-11-20 17:55:38.411439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:11.324 [2024-11-20 17:55:38.411451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:11.324 [2024-11-20 17:55:38.411461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:21:11.324 [2024-11-20 17:55:38.411473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.324 [2024-11-20 17:55:38.411483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:11.324 [2024-11-20 17:55:38.411495] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:11.324 [2024-11-20 17:55:38.411504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.324 [2024-11-20 17:55:38.411523] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:11.324 [2024-11-20 17:55:38.411534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:11.324 [2024-11-20 17:55:38.411549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:11.324 [2024-11-20 17:55:38.411560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:11.324 [2024-11-20 17:55:38.411577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:11.324 [2024-11-20 17:55:38.411587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:11.324 [2024-11-20 17:55:38.411599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:11.324 [2024-11-20 17:55:38.411609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:11.324 [2024-11-20 17:55:38.411621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:11.324 [2024-11-20 17:55:38.411632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:11.324 [2024-11-20 17:55:38.411649] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:11.324 [2024-11-20 17:55:38.411662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:11.324 [2024-11-20 17:55:38.411676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:11.324 [2024-11-20 17:55:38.411688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:11.324 [2024-11-20 17:55:38.411701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:11.324 [2024-11-20 17:55:38.411712] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:11.324 [2024-11-20 17:55:38.411725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:11.324 [2024-11-20 17:55:38.411736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:11.324 [2024-11-20 17:55:38.411749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:11.324 [2024-11-20 17:55:38.411761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:11.324 [2024-11-20 17:55:38.411786] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:11.324 [2024-11-20 17:55:38.411798] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:11.324 [2024-11-20 17:55:38.411811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:11.324 [2024-11-20 17:55:38.411823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:11.324 [2024-11-20 17:55:38.411836] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:11.324 [2024-11-20 17:55:38.411847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:11.324 [2024-11-20 17:55:38.411862] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:11.324 [2024-11-20 17:55:38.411874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:11.324 [2024-11-20 17:55:38.411889] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:11.324 [2024-11-20 17:55:38.411900] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:11.324 [2024-11-20 17:55:38.411913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:11.324 [2024-11-20 17:55:38.411924] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:11.324 [2024-11-20 17:55:38.411938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:11.324 [2024-11-20 17:55:38.411951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:11.324 [2024-11-20 17:55:38.411965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.981 ms 00:21:11.324 [2024-11-20 17:55:38.411976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:11.324 [2024-11-20 17:55:38.412016] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
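
The region dump above is easiest to read as a contiguity check: sorted by offset (the dump prints the regions in a different order), each NV-cache region should start exactly where the previous one ends. A minimal sketch of that check, using the offsets and sizes printed above; the log rounds to two decimals, so 80.12 MiB is assumed to be exactly 80.125 MiB, which also lines up with the superblock dump's block offsets if 4 KiB blocks are assumed (e.g. band_md: blk_offs 0x5020 × 4 KiB = 80.125 MiB, blk_sz 0x80 = 0.5 MiB).

```python
# Contiguity check over the NV-cache regions dumped above, sorted by
# offset. Values are MiB figures from the log; third decimals (.125,
# .375, .625, .875) are inferred from the log's two-decimal rounding.
regions = [              # (name, offset_mib, size_mib)
    ("band_md",         80.125, 0.5),
    ("band_md_mirror",  80.625, 0.5),
    ("p2l0",            81.125, 8.0),
    ("p2l1",            89.125, 8.0),
    ("p2l2",            97.125, 8.0),
    ("p2l3",           105.125, 8.0),
    ("trim_md",        113.125, 0.25),
    ("trim_md_mirror", 113.375, 0.25),
    ("trim_log",       113.625, 0.125),
    ("trim_log_mirror", 113.75, 0.125),
    ("nvc_md",         113.875, 0.125),
    ("nvc_md_mirror",  114.0,   0.125),
]
for (_, off, size), (nxt, nxt_off, _) in zip(regions, regions[1:]):
    # every region starts exactly where the previous one ends
    assert off + size == nxt_off, nxt
```

All twelve regions chain back to back, so the layout dump and the superblock metadata layout that follows describe the same packing.
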
00:21:11.324 [2024-11-20 17:55:38.412030] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:15.519 [2024-11-20 17:55:42.255342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.519 [2024-11-20 17:55:42.255550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:15.519 [2024-11-20 17:55:42.255584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3849.567 ms 00:21:15.519 [2024-11-20 17:55:42.255595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.519 [2024-11-20 17:55:42.292257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.519 [2024-11-20 17:55:42.292306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:15.519 [2024-11-20 17:55:42.292325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.371 ms 00:21:15.519 [2024-11-20 17:55:42.292352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.519 [2024-11-20 17:55:42.292492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.519 [2024-11-20 17:55:42.292506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:15.519 [2024-11-20 17:55:42.292522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:21:15.519 [2024-11-20 17:55:42.292533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.519 [2024-11-20 17:55:42.357180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.519 [2024-11-20 17:55:42.357226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:15.519 [2024-11-20 17:55:42.357244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.693 ms 00:21:15.519 [2024-11-20 17:55:42.357271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.519 [2024-11-20 17:55:42.357311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.519 [2024-11-20 17:55:42.357325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:15.519 [2024-11-20 17:55:42.357338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:15.519 [2024-11-20 17:55:42.357349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.519 [2024-11-20 17:55:42.357853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.519 [2024-11-20 17:55:42.357868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:15.519 [2024-11-20 17:55:42.357882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:21:15.519 [2024-11-20 17:55:42.357892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.519 [2024-11-20 17:55:42.358002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.519 [2024-11-20 17:55:42.358015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:15.519 [2024-11-20 17:55:42.358032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:21:15.519 [2024-11-20 17:55:42.358042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.519 [2024-11-20 17:55:42.376218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.519 [2024-11-20 17:55:42.376257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:15.519 [2024-11-20 
17:55:42.376274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.185 ms 00:21:15.519 [2024-11-20 17:55:42.376301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.519 [2024-11-20 17:55:42.387822] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:21:15.519 [2024-11-20 17:55:42.393702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.519 [2024-11-20 17:55:42.393740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:15.519 [2024-11-20 17:55:42.393753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.343 ms 00:21:15.519 [2024-11-20 17:55:42.393794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.519 [2024-11-20 17:55:42.493202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.519 [2024-11-20 17:55:42.493263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:15.519 [2024-11-20 17:55:42.493280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.535 ms 00:21:15.519 [2024-11-20 17:55:42.493294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.519 [2024-11-20 17:55:42.493478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.519 [2024-11-20 17:55:42.493498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:15.519 [2024-11-20 17:55:42.493509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:21:15.519 [2024-11-20 17:55:42.493522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.519 [2024-11-20 17:55:42.529835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.519 [2024-11-20 17:55:42.530013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:15.519 [2024-11-20 17:55:42.530035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.317 ms 00:21:15.519 [2024-11-20 17:55:42.530049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.519 [2024-11-20 17:55:42.565127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.519 [2024-11-20 17:55:42.565171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:15.519 [2024-11-20 17:55:42.565185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.078 ms 00:21:15.519 [2024-11-20 17:55:42.565213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.519 [2024-11-20 17:55:42.565947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.519 [2024-11-20 17:55:42.565971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:15.519 [2024-11-20 17:55:42.565982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:21:15.519 [2024-11-20 17:55:42.565995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.519 [2024-11-20 17:55:42.669064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.519 [2024-11-20 17:55:42.669123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:15.519 [2024-11-20 17:55:42.669140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 103.183 ms 00:21:15.519 [2024-11-20 17:55:42.669153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.778 [2024-11-20 
17:55:42.706986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.778 [2024-11-20 17:55:42.707206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:15.778 [2024-11-20 17:55:42.707233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.813 ms 00:21:15.778 [2024-11-20 17:55:42.707246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.778 [2024-11-20 17:55:42.744618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.778 [2024-11-20 17:55:42.744709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:15.778 [2024-11-20 17:55:42.744725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.391 ms 00:21:15.778 [2024-11-20 17:55:42.744738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.778 [2024-11-20 17:55:42.782127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.778 [2024-11-20 17:55:42.782303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:15.778 [2024-11-20 17:55:42.782326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.385 ms 00:21:15.778 [2024-11-20 17:55:42.782339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.778 [2024-11-20 17:55:42.782381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.778 [2024-11-20 17:55:42.782399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:15.778 [2024-11-20 17:55:42.782410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:15.778 [2024-11-20 17:55:42.782423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.778 [2024-11-20 17:55:42.782551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:15.778 [2024-11-20 17:55:42.782570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:15.778 [2024-11-20 17:55:42.782581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:15.779 [2024-11-20 17:55:42.782593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:15.779 [2024-11-20 17:55:42.783613] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4396.230 ms, result 0 00:21:15.779 { 00:21:15.779 "name": "ftl0", 00:21:15.779 "uuid": "ebd971be-7814-4c85-8699-d34a37c8600d" 00:21:15.779 } 00:21:15.779 17:55:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:21:15.779 17:55:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:21:15.779 17:55:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:21:16.038 17:55:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:21:16.038 [2024-11-20 17:55:43.131643] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:16.038 I/O size of 69632 is greater than zero copy threshold (65536). 00:21:16.038 Zero copy mechanism will not be used. 00:21:16.038 Running I/O for 4 seconds... 
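
bdevperf prints the zero-copy notice above because the job's 69632-byte I/O size exceeds its 65536-byte zero-copy threshold, exactly as the log states; the sketch below merely restates that check in numbers (the 64 KiB + 4 KiB split is an observation about the value itself, not a claim about why the test chose it).

```python
# Zero-copy eligibility check for the q=1 job above (values from the log).
io_size, zcopy_threshold = 69632, 65536
assert io_size > zcopy_threshold   # hence "Zero copy mechanism will not be used"
print(io_size - zcopy_threshold)   # 4096: io_size is 64 KiB plus one 4 KiB block
```
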
00:21:18.354 1428.00 IOPS, 94.83 MiB/s [2024-11-20T17:55:46.467Z] 1439.50 IOPS, 95.59 MiB/s [2024-11-20T17:55:47.406Z] 1462.00 IOPS, 97.09 MiB/s [2024-11-20T17:55:47.406Z] 1479.75 IOPS, 98.26 MiB/s 00:21:20.230 Latency(us) 00:21:20.230 [2024-11-20T17:55:47.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.230 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:21:20.230 ftl0 : 4.00 1479.15 98.22 0.00 0.00 709.16 256.62 2289.81 00:21:20.230 [2024-11-20T17:55:47.406Z] =================================================================================================================== 00:21:20.230 [2024-11-20T17:55:47.406Z] Total : 1479.15 98.22 0.00 0.00 709.16 256.62 2289.81 00:21:20.230 [2024-11-20 17:55:47.137017] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:20.230 { 00:21:20.230 "results": [ 00:21:20.230 { 00:21:20.230 "job": "ftl0", 00:21:20.230 "core_mask": "0x1", 00:21:20.230 "workload": "randwrite", 00:21:20.230 "status": "finished", 00:21:20.230 "queue_depth": 1, 00:21:20.230 "io_size": 69632, 00:21:20.230 "runtime": 4.002309, 00:21:20.230 "iops": 1479.146162877479, 00:21:20.230 "mibps": 98.22454987858258, 00:21:20.230 "io_failed": 0, 00:21:20.230 "io_timeout": 0, 00:21:20.230 "avg_latency_us": 709.1594572886139, 00:21:20.230 "min_latency_us": 256.61686746987954, 00:21:20.230 "max_latency_us": 2289.8120481927713 00:21:20.230 } 00:21:20.230 ], 00:21:20.230 "core_count": 1 00:21:20.230 } 00:21:20.230 17:55:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:21:20.230 [2024-11-20 17:55:47.270296] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:20.230 Running I/O for 4 seconds... 
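
The q=1 job's JSON above is internally consistent: the reported MiB/s is just IOPS times the I/O size. A quick cross-check with the logged values:

```python
# Throughput cross-check for the q=1 randwrite job (values from the JSON above).
iops = 1479.146162877479        # "iops"
io_size = 69632                 # "io_size" in bytes
print(iops * io_size / 2**20)   # ~98.22 MiB/s, matching the reported "mibps"
```
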
00:21:22.109 12317.00 IOPS, 48.11 MiB/s [2024-11-20T17:55:50.664Z] 11997.00 IOPS, 46.86 MiB/s [2024-11-20T17:55:51.601Z] 11840.67 IOPS, 46.25 MiB/s [2024-11-20T17:55:51.601Z] 11903.00 IOPS, 46.50 MiB/s 00:21:24.425 Latency(us) 00:21:24.425 [2024-11-20T17:55:51.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:24.425 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:21:24.425 ftl0 : 4.01 11893.31 46.46 0.00 0.00 10742.69 209.73 23266.60 00:21:24.425 [2024-11-20T17:55:51.601Z] =================================================================================================================== 00:21:24.425 [2024-11-20T17:55:51.601Z] Total : 11893.31 46.46 0.00 0.00 10742.69 0.00 23266.60 00:21:24.425 { 00:21:24.425 "results": [ 00:21:24.425 { 00:21:24.425 "job": "ftl0", 00:21:24.425 "core_mask": "0x1", 00:21:24.425 "workload": "randwrite", 00:21:24.425 "status": "finished", 00:21:24.425 "queue_depth": 128, 00:21:24.425 "io_size": 4096, 00:21:24.425 "runtime": 4.013853, 00:21:24.425 "iops": 11893.310492437067, 00:21:24.425 "mibps": 46.458244111082294, 00:21:24.425 "io_failed": 0, 00:21:24.425 "io_timeout": 0, 00:21:24.425 "avg_latency_us": 10742.692767500517, 00:21:24.425 "min_latency_us": 209.73493975903614, 00:21:24.425 "max_latency_us": 23266.595983935742 00:21:24.425 } 00:21:24.425 ], 00:21:24.425 "core_count": 1 00:21:24.425 } 00:21:24.425 [2024-11-20 17:55:51.287616] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:24.425 17:55:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:21:24.426 [2024-11-20 17:55:51.407858] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:21:24.426 Running I/O for 4 seconds... 
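
For the q=128 randwrite job above, the reported IOPS and average latency can also be sanity-checked against each other via Little's law (IOPS ≈ queue depth / average latency). The relation holds only approximately here, since the queue is not kept perfectly full for the whole run:

```python
# Little's-law check for the q=128 randwrite job (values from the JSON above).
queue_depth = 128
avg_latency_s = 10742.692767500517e-6  # "avg_latency_us" converted to seconds
print(queue_depth / avg_latency_s)     # ~11915 IOPS vs. 11893.31 reported (~0.2% apart)
```
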
00:21:26.300 9186.00 IOPS, 35.88 MiB/s [2024-11-20T17:55:54.851Z] 9233.50 IOPS, 36.07 MiB/s [2024-11-20T17:55:55.418Z] 9260.00 IOPS, 36.17 MiB/s [2024-11-20T17:55:55.678Z] 9165.50 IOPS, 35.80 MiB/s 00:21:28.502 Latency(us) 00:21:28.502 [2024-11-20T17:55:55.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:28.502 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:28.502 Verification LBA range: start 0x0 length 0x1400000 00:21:28.502 ftl0 : 4.01 9174.47 35.84 0.00 0.00 13909.52 246.75 18213.22 00:21:28.502 [2024-11-20T17:55:55.678Z] =================================================================================================================== 00:21:28.502 [2024-11-20T17:55:55.678Z] Total : 9174.47 35.84 0.00 0.00 13909.52 0.00 18213.22 00:21:28.502 [2024-11-20 17:55:55.430757] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:21:28.502 { 00:21:28.502 "results": [ 00:21:28.502 { 00:21:28.502 "job": "ftl0", 00:21:28.502 "core_mask": "0x1", 00:21:28.502 "workload": "verify", 00:21:28.502 "status": "finished", 00:21:28.502 "verify_range": { 00:21:28.502 "start": 0, 00:21:28.502 "length": 20971520 00:21:28.502 }, 00:21:28.502 "queue_depth": 128, 00:21:28.502 "io_size": 4096, 00:21:28.502 "runtime": 4.010041, 00:21:28.502 "iops": 9174.46978721664, 00:21:28.502 "mibps": 35.837772606315, 00:21:28.502 "io_failed": 0, 00:21:28.502 "io_timeout": 0, 00:21:28.502 "avg_latency_us": 13909.519841147685, 00:21:28.502 "min_latency_us": 246.74698795180723, 00:21:28.502 "max_latency_us": 18213.21767068273 00:21:28.502 } 00:21:28.502 ], 00:21:28.502 "core_count": 1 00:21:28.502 } 00:21:28.502 17:55:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:21:28.502 [2024-11-20 17:55:55.642221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.502 [2024-11-20 17:55:55.642496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:28.502 [2024-11-20 17:55:55.642521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:28.502 [2024-11-20 17:55:55.642535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.502 [2024-11-20 17:55:55.642578] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:28.502 [2024-11-20 17:55:55.646712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.502 [2024-11-20 17:55:55.646745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:28.502 [2024-11-20 17:55:55.646761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.117 ms 00:21:28.502 [2024-11-20 17:55:55.646787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.502 [2024-11-20 17:55:55.648706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.502 [2024-11-20 17:55:55.648744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:28.502 [2024-11-20 17:55:55.648760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.892 ms 00:21:28.502 [2024-11-20 17:55:55.648782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.761 [2024-11-20 17:55:55.855606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.761 [2024-11-20 17:55:55.855860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:21:28.761 [2024-11-20 17:55:55.855897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 207.125 ms 00:21:28.761 [2024-11-20 17:55:55.855910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.761 [2024-11-20 17:55:55.860940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.761 [2024-11-20 17:55:55.860974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:28.761 [2024-11-20 17:55:55.860990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.976 ms 00:21:28.761 [2024-11-20 17:55:55.861001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.761 [2024-11-20 17:55:55.897272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.761 [2024-11-20 17:55:55.897315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:28.761 [2024-11-20 17:55:55.897333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.247 ms 00:21:28.761 [2024-11-20 17:55:55.897343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.761 [2024-11-20 17:55:55.919779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.761 [2024-11-20 17:55:55.919824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:28.761 [2024-11-20 17:55:55.919842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.426 ms 00:21:28.761 [2024-11-20 17:55:55.919853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:28.761 [2024-11-20 17:55:55.919995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:28.761 [2024-11-20 17:55:55.920009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:28.761 [2024-11-20 17:55:55.920027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:21:28.761 [2024-11-20 17:55:55.920037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.022 [2024-11-20 17:55:55.957233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.022 [2024-11-20 17:55:55.957275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:29.022 [2024-11-20 17:55:55.957292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.235 ms 00:21:29.022 [2024-11-20 17:55:55.957318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.022 [2024-11-20 17:55:55.993560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.022 [2024-11-20 17:55:55.993600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:29.022 [2024-11-20 17:55:55.993617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.259 ms 00:21:29.022 [2024-11-20 17:55:55.993643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.022 [2024-11-20 17:55:56.029147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.022 [2024-11-20 17:55:56.029185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:29.022 [2024-11-20 17:55:56.029201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.513 ms 00:21:29.022 [2024-11-20 17:55:56.029227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.022 [2024-11-20 17:55:56.065082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.022 [2024-11-20 17:55:56.065121] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:29.022 [2024-11-20 17:55:56.065140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.819 ms 00:21:29.022 [2024-11-20 17:55:56.065149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.022 [2024-11-20 17:55:56.065190] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:29.022 [2024-11-20 17:55:56.065207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:21:29.022 [2024-11-20 17:55:56.065478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.065998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.066014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.066025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.066039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.066050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:29.022 [2024-11-20 17:55:56.066063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066435] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:29.023 [2024-11-20 17:55:56.066492] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:29.023 [2024-11-20 17:55:56.066505] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ebd971be-7814-4c85-8699-d34a37c8600d 00:21:29.023 [2024-11-20 17:55:56.066516] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:29.023 [2024-11-20 17:55:56.066531] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:29.023 [2024-11-20 17:55:56.066541] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:29.023 [2024-11-20 17:55:56.066554] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:29.023 [2024-11-20 17:55:56.066564] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:29.023 [2024-11-20 17:55:56.066576] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:29.023 [2024-11-20 17:55:56.066586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:29.023 [2024-11-20 17:55:56.066600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:29.023 [2024-11-20 17:55:56.066609] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:29.023 [2024-11-20 17:55:56.066622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.023 [2024-11-20 17:55:56.066632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:29.023 [2024-11-20 17:55:56.066646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.437 ms 00:21:29.023 [2024-11-20 17:55:56.066656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.023 [2024-11-20 17:55:56.086679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.023 [2024-11-20 17:55:56.086716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:29.023 [2024-11-20 17:55:56.086731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.003 ms 00:21:29.023 [2024-11-20 17:55:56.086742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.023 [2024-11-20 17:55:56.087365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:29.023 [2024-11-20 17:55:56.087387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:29.023 [2024-11-20 17:55:56.087401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:21:29.023 [2024-11-20 17:55:56.087411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.023 [2024-11-20 17:55:56.141828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:29.023 [2024-11-20 17:55:56.141866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:29.023 [2024-11-20 17:55:56.141885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:29.023 [2024-11-20 17:55:56.141895] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:29.023 [2024-11-20 17:55:56.141957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:29.023 [2024-11-20 17:55:56.141968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:29.023 [2024-11-20 17:55:56.141981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:29.023 [2024-11-20 17:55:56.141991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.023 [2024-11-20 17:55:56.142096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:29.023 [2024-11-20 17:55:56.142110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:29.023 [2024-11-20 17:55:56.142123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:29.023 [2024-11-20 17:55:56.142134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.023 [2024-11-20 17:55:56.142163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:29.023 [2024-11-20 17:55:56.142173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:29.023 [2024-11-20 17:55:56.142186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:29.023 [2024-11-20 17:55:56.142196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.282 [2024-11-20 17:55:56.265153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:29.282 [2024-11-20 17:55:56.265221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:29.282 [2024-11-20 17:55:56.265242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:29.282 [2024-11-20 17:55:56.265254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.282 [2024-11-20 17:55:56.365929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:29.282 [2024-11-20 17:55:56.365977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:29.282 [2024-11-20 17:55:56.365993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:29.282 [2024-11-20 17:55:56.366004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.282 [2024-11-20 17:55:56.366123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:29.282 [2024-11-20 17:55:56.366138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:29.282 [2024-11-20 17:55:56.366152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:29.282 [2024-11-20 17:55:56.366162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.282 [2024-11-20 17:55:56.366219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:29.282 [2024-11-20 17:55:56.366231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:29.282 [2024-11-20 17:55:56.366244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:29.282 [2024-11-20 17:55:56.366254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.282 [2024-11-20 17:55:56.366365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:29.282 [2024-11-20 17:55:56.366379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:29.282 [2024-11-20 17:55:56.366398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:21:29.282 [2024-11-20 17:55:56.366408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.282 [2024-11-20 17:55:56.366446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:29.282 [2024-11-20 17:55:56.366459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:29.282 [2024-11-20 17:55:56.366472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:29.282 [2024-11-20 17:55:56.366482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.282 [2024-11-20 17:55:56.366520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:29.282 [2024-11-20 17:55:56.366532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:29.282 [2024-11-20 17:55:56.366548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:29.282 [2024-11-20 17:55:56.366559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.282 [2024-11-20 17:55:56.366606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:29.282 [2024-11-20 17:55:56.366628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:29.282 [2024-11-20 17:55:56.366642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:29.282 [2024-11-20 17:55:56.366652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:29.283 [2024-11-20 17:55:56.366801] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 725.690 ms, result 0 00:21:29.283 true 00:21:29.283 17:55:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77773 00:21:29.283 17:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77773 ']' 00:21:29.283 17:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77773 00:21:29.283 17:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:21:29.283 17:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.283 17:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77773 00:21:29.283 killing process with pid 77773 00:21:29.283 Received shutdown signal, test time was about 4.000000 seconds 00:21:29.283 00:21:29.283 Latency(us) 00:21:29.283 [2024-11-20T17:55:56.459Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.283 [2024-11-20T17:55:56.459Z] =================================================================================================================== 00:21:29.283 [2024-11-20T17:55:56.459Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.283 17:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:29.283 17:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:29.283 17:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77773' 00:21:29.283 17:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77773 00:21:29.283 17:55:56 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77773 00:21:33.475 Remove shared memory files 00:21:33.475 17:55:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:33.475 17:55:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:21:33.475 17:55:59 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:21:33.476 17:55:59 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:21:33.476 17:55:59 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:21:33.476 17:55:59 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:21:33.476 17:55:59 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:21:33.476 17:55:59 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:21:33.476 ************************************ 00:21:33.476 END TEST ftl_bdevperf 00:21:33.476 ************************************ 00:21:33.476 00:21:33.476 real 0m25.777s 00:21:33.476 user 0m28.361s 00:21:33.476 sys 0m1.261s 00:21:33.476 17:55:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.476 17:55:59 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:21:33.476 17:56:00 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:33.476 17:56:00 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:33.476 17:56:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.476 17:56:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:21:33.476 ************************************ 00:21:33.476 START TEST ftl_trim 00:21:33.476 ************************************ 00:21:33.476 17:56:00 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:21:33.476 * Looking for test storage... 00:21:33.476 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:21:33.476 17:56:00 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:33.476 17:56:00 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version 00:21:33.476 17:56:00 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:33.476 17:56:00 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:33.476 17:56:00 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:21:33.476 17:56:00 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:33.476 17:56:00 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:33.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.476 --rc genhtml_branch_coverage=1 00:21:33.476 --rc genhtml_function_coverage=1 00:21:33.476 --rc genhtml_legend=1 00:21:33.476 --rc geninfo_all_blocks=1 00:21:33.476 --rc geninfo_unexecuted_blocks=1 00:21:33.476 00:21:33.476 ' 00:21:33.476 17:56:00 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:33.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.476 --rc genhtml_branch_coverage=1 00:21:33.476 --rc genhtml_function_coverage=1 00:21:33.476 --rc genhtml_legend=1 00:21:33.476 --rc geninfo_all_blocks=1 00:21:33.476 --rc geninfo_unexecuted_blocks=1 00:21:33.476 00:21:33.476 ' 00:21:33.476 17:56:00 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:33.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.476 --rc genhtml_branch_coverage=1 00:21:33.476 --rc genhtml_function_coverage=1 00:21:33.476 --rc genhtml_legend=1 00:21:33.476 --rc geninfo_all_blocks=1 00:21:33.476 --rc geninfo_unexecuted_blocks=1 00:21:33.476 00:21:33.476 ' 00:21:33.476 17:56:00 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:33.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:33.476 --rc genhtml_branch_coverage=1 00:21:33.476 --rc genhtml_function_coverage=1 00:21:33.476 --rc genhtml_legend=1 00:21:33.476 --rc geninfo_all_blocks=1 00:21:33.476 --rc geninfo_unexecuted_blocks=1 00:21:33.476 00:21:33.476 ' 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:33.476 17:56:00 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78141 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:21:33.476 17:56:00 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78141 00:21:33.476 17:56:00 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78141 ']' 00:21:33.476 17:56:00 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.476 17:56:00 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.476 17:56:00 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.476 17:56:00 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.476 17:56:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:21:33.476 [2024-11-20 17:56:00.426624] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:21:33.476 [2024-11-20 17:56:00.426740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78141 ] 00:21:33.476 [2024-11-20 17:56:00.609614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:33.735 [2024-11-20 17:56:00.728720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.735 [2024-11-20 17:56:00.728887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:33.735 [2024-11-20 17:56:00.728920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.672 17:56:01 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.672 17:56:01 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:21:34.672 17:56:01 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:21:34.672 17:56:01 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:21:34.672 17:56:01 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:21:34.672 17:56:01 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:21:34.672 17:56:01 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:21:34.672 17:56:01 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:21:34.931 17:56:01 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:21:34.931 17:56:01 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:21:34.931 17:56:01 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:21:34.931 17:56:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:21:34.931 17:56:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:34.931 17:56:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:34.931 17:56:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:34.931 17:56:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:21:35.190 17:56:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:35.190 { 00:21:35.190 "name": "nvme0n1", 00:21:35.190 "aliases": [ 
00:21:35.190 "ca1eadea-8807-4b9b-b41a-7a066986c140" 00:21:35.190 ], 00:21:35.190 "product_name": "NVMe disk", 00:21:35.190 "block_size": 4096, 00:21:35.190 "num_blocks": 1310720, 00:21:35.190 "uuid": "ca1eadea-8807-4b9b-b41a-7a066986c140", 00:21:35.190 "numa_id": -1, 00:21:35.190 "assigned_rate_limits": { 00:21:35.190 "rw_ios_per_sec": 0, 00:21:35.190 "rw_mbytes_per_sec": 0, 00:21:35.190 "r_mbytes_per_sec": 0, 00:21:35.190 "w_mbytes_per_sec": 0 00:21:35.190 }, 00:21:35.190 "claimed": true, 00:21:35.190 "claim_type": "read_many_write_one", 00:21:35.190 "zoned": false, 00:21:35.190 "supported_io_types": { 00:21:35.190 "read": true, 00:21:35.190 "write": true, 00:21:35.190 "unmap": true, 00:21:35.190 "flush": true, 00:21:35.190 "reset": true, 00:21:35.190 "nvme_admin": true, 00:21:35.190 "nvme_io": true, 00:21:35.190 "nvme_io_md": false, 00:21:35.190 "write_zeroes": true, 00:21:35.190 "zcopy": false, 00:21:35.190 "get_zone_info": false, 00:21:35.190 "zone_management": false, 00:21:35.190 "zone_append": false, 00:21:35.190 "compare": true, 00:21:35.190 "compare_and_write": false, 00:21:35.190 "abort": true, 00:21:35.190 "seek_hole": false, 00:21:35.190 "seek_data": false, 00:21:35.190 "copy": true, 00:21:35.190 "nvme_iov_md": false 00:21:35.190 }, 00:21:35.190 "driver_specific": { 00:21:35.190 "nvme": [ 00:21:35.190 { 00:21:35.190 "pci_address": "0000:00:11.0", 00:21:35.190 "trid": { 00:21:35.190 "trtype": "PCIe", 00:21:35.190 "traddr": "0000:00:11.0" 00:21:35.190 }, 00:21:35.191 "ctrlr_data": { 00:21:35.191 "cntlid": 0, 00:21:35.191 "vendor_id": "0x1b36", 00:21:35.191 "model_number": "QEMU NVMe Ctrl", 00:21:35.191 "serial_number": "12341", 00:21:35.191 "firmware_revision": "8.0.0", 00:21:35.191 "subnqn": "nqn.2019-08.org.qemu:12341", 00:21:35.191 "oacs": { 00:21:35.191 "security": 0, 00:21:35.191 "format": 1, 00:21:35.191 "firmware": 0, 00:21:35.191 "ns_manage": 1 00:21:35.191 }, 00:21:35.191 "multi_ctrlr": false, 00:21:35.191 "ana_reporting": false 00:21:35.191 }, 00:21:35.191 "vs": { 00:21:35.191 "nvme_version": "1.4" 00:21:35.191 }, 00:21:35.191 "ns_data": { 00:21:35.191 "id": 1, 00:21:35.191 "can_share": false 00:21:35.191 } 00:21:35.191 } 00:21:35.191 ], 00:21:35.191 "mp_policy": "active_passive" 00:21:35.191 } 00:21:35.191 } 00:21:35.191 ]' 00:21:35.191 17:56:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:35.191 17:56:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:35.191 17:56:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:35.191 17:56:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:21:35.191 17:56:02 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:21:35.191 17:56:02 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:21:35.191 17:56:02 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:21:35.191 17:56:02 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:21:35.191 17:56:02 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:21:35.191 17:56:02 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:21:35.191 17:56:02 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:35.450 17:56:02 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=676be418-944a-4839-bc2a-714af6befebd 00:21:35.450 17:56:02 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:21:35.450 17:56:02 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 676be418-944a-4839-bc2a-714af6befebd 00:21:35.709 17:56:02 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:21:35.709 17:56:02 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=4194be54-3e5a-45b3-a54b-98b81a8b0659 00:21:35.709 17:56:02 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 4194be54-3e5a-45b3-a54b-98b81a8b0659 00:21:35.967 17:56:03 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=60f31de7-044d-4af0-b080-6b6fab8e3621 00:21:35.967 17:56:03 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 60f31de7-044d-4af0-b080-6b6fab8e3621 00:21:35.968 17:56:03 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:21:35.968 17:56:03 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:21:35.968 17:56:03 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=60f31de7-044d-4af0-b080-6b6fab8e3621 00:21:35.968 17:56:03 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:21:35.968 17:56:03 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 60f31de7-044d-4af0-b080-6b6fab8e3621 00:21:35.968 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=60f31de7-044d-4af0-b080-6b6fab8e3621 00:21:35.968 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:35.968 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:35.968 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:35.968 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60f31de7-044d-4af0-b080-6b6fab8e3621 00:21:36.227 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:36.227 { 00:21:36.227 "name": "60f31de7-044d-4af0-b080-6b6fab8e3621", 00:21:36.227 "aliases": [ 00:21:36.227 "lvs/nvme0n1p0" 00:21:36.227 ], 00:21:36.227 "product_name": "Logical Volume", 00:21:36.227 "block_size": 4096, 00:21:36.227 "num_blocks": 26476544, 00:21:36.227 "uuid": "60f31de7-044d-4af0-b080-6b6fab8e3621", 00:21:36.227 "assigned_rate_limits": { 00:21:36.227 "rw_ios_per_sec": 0, 00:21:36.227 "rw_mbytes_per_sec": 0, 00:21:36.227 "r_mbytes_per_sec": 0, 00:21:36.227 "w_mbytes_per_sec": 0 00:21:36.227 }, 00:21:36.227 "claimed": false, 00:21:36.227 "zoned": false, 00:21:36.227 "supported_io_types": { 00:21:36.227 "read": true, 00:21:36.227 "write": true, 00:21:36.227 "unmap": true, 00:21:36.227 "flush": false, 00:21:36.227 "reset": true, 00:21:36.227 "nvme_admin": false, 00:21:36.227 "nvme_io": false, 00:21:36.227 "nvme_io_md": false, 00:21:36.227 "write_zeroes": true, 00:21:36.227 "zcopy": false, 00:21:36.227 "get_zone_info": false, 00:21:36.227 "zone_management": false, 00:21:36.227 "zone_append": false, 00:21:36.227 "compare": false, 00:21:36.227 "compare_and_write": false, 00:21:36.227 "abort": false, 00:21:36.227 "seek_hole": true, 00:21:36.227 "seek_data": true, 00:21:36.227 "copy": false, 00:21:36.227 "nvme_iov_md": false 00:21:36.227 }, 00:21:36.227 "driver_specific": { 00:21:36.227 "lvol": { 00:21:36.227 "lvol_store_uuid": "4194be54-3e5a-45b3-a54b-98b81a8b0659", 00:21:36.227 "base_bdev": "nvme0n1", 00:21:36.227 "thin_provision": true, 00:21:36.227 "num_allocated_clusters": 0, 00:21:36.227 "snapshot": false, 00:21:36.227 "clone": false, 00:21:36.227 "esnap_clone": false 00:21:36.227 } 00:21:36.227 } 00:21:36.227 } 00:21:36.227 ]' 00:21:36.227 17:56:03 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:36.227 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:36.227 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:36.227 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:36.227 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:36.227 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:36.227 17:56:03 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:21:36.227 17:56:03 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:21:36.227 17:56:03 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:21:36.486 17:56:03 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:21:36.486 17:56:03 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:21:36.486 17:56:03 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 60f31de7-044d-4af0-b080-6b6fab8e3621 00:21:36.486 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=60f31de7-044d-4af0-b080-6b6fab8e3621 00:21:36.486 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:36.486 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:36.486 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:36.486 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60f31de7-044d-4af0-b080-6b6fab8e3621 00:21:37.096 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:37.096 { 00:21:37.096 "name": "60f31de7-044d-4af0-b080-6b6fab8e3621", 00:21:37.096 "aliases": [ 00:21:37.096 "lvs/nvme0n1p0" 00:21:37.096 ], 00:21:37.096 "product_name": "Logical Volume", 00:21:37.096 "block_size": 4096, 00:21:37.096 "num_blocks": 26476544, 00:21:37.096 "uuid": "60f31de7-044d-4af0-b080-6b6fab8e3621", 00:21:37.096 "assigned_rate_limits": { 00:21:37.096 "rw_ios_per_sec": 0, 00:21:37.096 "rw_mbytes_per_sec": 0, 00:21:37.096 "r_mbytes_per_sec": 0, 00:21:37.096 "w_mbytes_per_sec": 0 00:21:37.096 }, 00:21:37.096 "claimed": false, 00:21:37.096 "zoned": false, 00:21:37.096 "supported_io_types": { 00:21:37.096 "read": true, 00:21:37.096 "write": true, 00:21:37.096 "unmap": true, 00:21:37.096 "flush": false, 00:21:37.096 "reset": true, 00:21:37.096 "nvme_admin": false, 00:21:37.096 "nvme_io": false, 00:21:37.096 "nvme_io_md": false, 00:21:37.096 "write_zeroes": true, 00:21:37.096 "zcopy": false, 00:21:37.096 "get_zone_info": false, 00:21:37.096 "zone_management": false, 00:21:37.096 "zone_append": false, 00:21:37.096 "compare": false, 00:21:37.096 "compare_and_write": false, 00:21:37.096 "abort": false, 00:21:37.096 "seek_hole": true, 00:21:37.096 "seek_data": true, 00:21:37.096 "copy": false, 00:21:37.096 "nvme_iov_md": false 00:21:37.096 }, 00:21:37.096 "driver_specific": { 00:21:37.096 "lvol": { 00:21:37.096 "lvol_store_uuid": "4194be54-3e5a-45b3-a54b-98b81a8b0659", 00:21:37.096 "base_bdev": "nvme0n1", 00:21:37.096 "thin_provision": true, 00:21:37.096 "num_allocated_clusters": 0, 00:21:37.096 "snapshot": false, 00:21:37.096 "clone": false, 00:21:37.096 "esnap_clone": false 00:21:37.096 } 00:21:37.096 } 00:21:37.096 } 00:21:37.096 ]' 00:21:37.096 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:37.096 17:56:03 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:21:37.096 17:56:03 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:37.096 17:56:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:21:37.096 17:56:04 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:37.096 17:56:04 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:37.096 17:56:04 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:21:37.096 17:56:04 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:21:37.096 17:56:04 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:21:37.096 17:56:04 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:21:37.096 17:56:04 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 60f31de7-044d-4af0-b080-6b6fab8e3621 00:21:37.096 17:56:04 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=60f31de7-044d-4af0-b080-6b6fab8e3621 00:21:37.096 17:56:04 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:21:37.096 17:56:04 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:21:37.096 17:56:04 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:21:37.096 17:56:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 60f31de7-044d-4af0-b080-6b6fab8e3621 00:21:37.407 17:56:04 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:21:37.407 { 00:21:37.407 "name": "60f31de7-044d-4af0-b080-6b6fab8e3621", 00:21:37.407 "aliases": [ 00:21:37.407 "lvs/nvme0n1p0" 00:21:37.407 ], 00:21:37.407 "product_name": "Logical Volume", 00:21:37.407 "block_size": 4096, 00:21:37.407 "num_blocks": 26476544, 00:21:37.407 "uuid": "60f31de7-044d-4af0-b080-6b6fab8e3621", 00:21:37.407 "assigned_rate_limits": { 00:21:37.407 "rw_ios_per_sec": 0, 00:21:37.407 "rw_mbytes_per_sec": 0, 00:21:37.407 "r_mbytes_per_sec": 0, 00:21:37.407 "w_mbytes_per_sec": 0 00:21:37.407 }, 00:21:37.407 "claimed": false, 00:21:37.407 "zoned": false, 00:21:37.407 "supported_io_types": { 00:21:37.407 "read": true, 00:21:37.407 "write": true, 00:21:37.407 "unmap": true, 00:21:37.407 "flush": false, 00:21:37.407 "reset": true, 00:21:37.407 "nvme_admin": false, 00:21:37.407 "nvme_io": false, 00:21:37.407 "nvme_io_md": false, 00:21:37.407 "write_zeroes": true, 00:21:37.407 "zcopy": false, 00:21:37.407 "get_zone_info": false, 00:21:37.407 "zone_management": false, 00:21:37.407 "zone_append": false, 00:21:37.407 "compare": false, 00:21:37.407 "compare_and_write": false, 00:21:37.407 "abort": false, 00:21:37.407 "seek_hole": true, 00:21:37.407 "seek_data": true, 00:21:37.407 "copy": false, 00:21:37.407 "nvme_iov_md": false 00:21:37.407 }, 00:21:37.407 "driver_specific": { 00:21:37.407 "lvol": { 00:21:37.407 "lvol_store_uuid": "4194be54-3e5a-45b3-a54b-98b81a8b0659", 00:21:37.407 "base_bdev": "nvme0n1", 00:21:37.407 "thin_provision": true, 00:21:37.407 "num_allocated_clusters": 0, 00:21:37.407 "snapshot": false, 00:21:37.407 "clone": false, 00:21:37.407 "esnap_clone": false 00:21:37.407 } 00:21:37.407 } 00:21:37.407 } 00:21:37.407 ]' 00:21:37.408 17:56:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:21:37.408 17:56:04 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:21:37.408 17:56:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:21:37.408 17:56:04 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
nb=26476544 00:21:37.408 17:56:04 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:21:37.408 17:56:04 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:21:37.408 17:56:04 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:21:37.408 17:56:04 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 60f31de7-044d-4af0-b080-6b6fab8e3621 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:21:37.667 [2024-11-20 17:56:04.679186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.667 [2024-11-20 17:56:04.679241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:37.667 [2024-11-20 17:56:04.679260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:37.667 [2024-11-20 17:56:04.679271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.667 [2024-11-20 17:56:04.682591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.667 [2024-11-20 17:56:04.682799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:37.667 [2024-11-20 17:56:04.682827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.288 ms 00:21:37.667 [2024-11-20 17:56:04.682839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.667 [2024-11-20 17:56:04.682971] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:37.667 [2024-11-20 17:56:04.684002] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:37.667 [2024-11-20 17:56:04.684042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.667 [2024-11-20 17:56:04.684054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:37.667 [2024-11-20 17:56:04.684067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.081 ms 00:21:37.667 [2024-11-20 17:56:04.684078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.667 [2024-11-20 17:56:04.684192] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 5259f8b1-3ab3-43ba-9a28-e53cd5fd0400 00:21:37.667 [2024-11-20 17:56:04.685686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.667 [2024-11-20 17:56:04.685726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:21:37.667 [2024-11-20 17:56:04.685739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:37.667 [2024-11-20 17:56:04.685753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.667 [2024-11-20 17:56:04.693278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.667 [2024-11-20 17:56:04.693316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:37.667 [2024-11-20 17:56:04.693331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.440 ms 00:21:37.667 [2024-11-20 17:56:04.693344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.667 [2024-11-20 17:56:04.693501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.667 [2024-11-20 17:56:04.693520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:37.667 [2024-11-20 17:56:04.693532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.081 ms 00:21:37.667 [2024-11-20 17:56:04.693550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.667 [2024-11-20 17:56:04.693592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.667 [2024-11-20 17:56:04.693606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:37.667 [2024-11-20 17:56:04.693616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:37.667 [2024-11-20 17:56:04.693634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.667 [2024-11-20 17:56:04.693678] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:37.667 [2024-11-20 17:56:04.698653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.667 [2024-11-20 17:56:04.698690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:37.667 [2024-11-20 17:56:04.698706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.986 ms 00:21:37.667 [2024-11-20 17:56:04.698716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.667 [2024-11-20 17:56:04.698793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.667 [2024-11-20 17:56:04.698807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:37.667 [2024-11-20 17:56:04.698821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:37.667 [2024-11-20 17:56:04.698849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.667 [2024-11-20 17:56:04.698884] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:21:37.667 [2024-11-20 17:56:04.699010] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:37.667 [2024-11-20 17:56:04.699031] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:37.668 [2024-11-20 17:56:04.699046] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:37.668 [2024-11-20 17:56:04.699061] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:37.668 [2024-11-20 17:56:04.699074] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:37.668 [2024-11-20 17:56:04.699089] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:37.668 [2024-11-20 17:56:04.699100] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:37.668 [2024-11-20 17:56:04.699113] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:37.668 [2024-11-20 17:56:04.699126] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:37.668 [2024-11-20 17:56:04.699139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.668 [2024-11-20 17:56:04.699149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:37.668 [2024-11-20 17:56:04.699164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.258 ms 00:21:37.668 [2024-11-20 17:56:04.699175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.668 [2024-11-20 17:56:04.699261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.668 
[2024-11-20 17:56:04.699273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:37.668 [2024-11-20 17:56:04.699286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:37.668 [2024-11-20 17:56:04.699297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.668 [2024-11-20 17:56:04.699414] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:37.668 [2024-11-20 17:56:04.699427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:37.668 [2024-11-20 17:56:04.699440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:37.668 [2024-11-20 17:56:04.699451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.668 [2024-11-20 17:56:04.699477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:37.668 [2024-11-20 17:56:04.699487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:37.668 [2024-11-20 17:56:04.699500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:37.668 [2024-11-20 17:56:04.699511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:37.668 [2024-11-20 17:56:04.699524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:37.668 [2024-11-20 17:56:04.699533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:37.668 [2024-11-20 17:56:04.699545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:37.668 [2024-11-20 17:56:04.699556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:37.668 [2024-11-20 17:56:04.699567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:37.668 [2024-11-20 17:56:04.699577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:37.668 [2024-11-20 17:56:04.699590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:37.668 [2024-11-20 17:56:04.699600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.668 [2024-11-20 17:56:04.699614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:37.668 [2024-11-20 17:56:04.699623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:37.668 [2024-11-20 17:56:04.699636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.668 [2024-11-20 17:56:04.699646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:37.668 [2024-11-20 17:56:04.699657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:37.668 [2024-11-20 17:56:04.699666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.668 [2024-11-20 17:56:04.699678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:37.668 [2024-11-20 17:56:04.699687] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:37.668 [2024-11-20 17:56:04.699699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.668 [2024-11-20 17:56:04.699709] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:37.668 [2024-11-20 17:56:04.699720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:37.668 [2024-11-20 17:56:04.699729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.668 [2024-11-20 17:56:04.699741] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:21:37.668 [2024-11-20 17:56:04.699750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:37.668 [2024-11-20 17:56:04.699761] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:37.668 [2024-11-20 17:56:04.699781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:37.668 [2024-11-20 17:56:04.699796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:37.668 [2024-11-20 17:56:04.699805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:37.668 [2024-11-20 17:56:04.699817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:37.668 [2024-11-20 17:56:04.699827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:37.668 [2024-11-20 17:56:04.699839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:37.668 [2024-11-20 17:56:04.699849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:37.668 [2024-11-20 17:56:04.699860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:37.668 [2024-11-20 17:56:04.699870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.668 [2024-11-20 17:56:04.699881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:37.668 [2024-11-20 17:56:04.699890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:37.668 [2024-11-20 17:56:04.699902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.668 [2024-11-20 17:56:04.699913] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:37.668 [2024-11-20 17:56:04.699926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:37.668 [2024-11-20 17:56:04.699936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:37.668 [2024-11-20 17:56:04.699953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:37.668 [2024-11-20 17:56:04.699965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:37.668 [2024-11-20 17:56:04.699979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:37.668 [2024-11-20 17:56:04.699988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:37.668 [2024-11-20 17:56:04.700000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:37.668 [2024-11-20 17:56:04.700009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:37.668 [2024-11-20 17:56:04.700021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:37.668 [2024-11-20 17:56:04.700035] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:37.668 [2024-11-20 17:56:04.700050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:37.668 [2024-11-20 17:56:04.700065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:37.668 [2024-11-20 17:56:04.700078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:37.668 [2024-11-20 17:56:04.700088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:21:37.668 [2024-11-20 17:56:04.700101] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:37.668 [2024-11-20 17:56:04.700111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:37.668 [2024-11-20 17:56:04.700124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:37.668 [2024-11-20 17:56:04.700134] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:37.668 [2024-11-20 17:56:04.700147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:37.668 [2024-11-20 17:56:04.700157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:37.668 [2024-11-20 17:56:04.700172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:37.668 [2024-11-20 17:56:04.700182] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:37.668 [2024-11-20 17:56:04.700195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:37.668 [2024-11-20 17:56:04.700205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:37.668 [2024-11-20 17:56:04.700220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:37.668 [2024-11-20 17:56:04.700231] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:37.668 [2024-11-20 17:56:04.700249] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:37.668 [2024-11-20 17:56:04.700260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:37.668 [2024-11-20 17:56:04.700273] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:37.668 [2024-11-20 17:56:04.700283] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:37.668 [2024-11-20 17:56:04.700296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:37.668 [2024-11-20 17:56:04.700307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:37.668 [2024-11-20 17:56:04.700320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:37.668 [2024-11-20 17:56:04.700331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:21:37.668 [2024-11-20 17:56:04.700347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:37.668 [2024-11-20 17:56:04.700434] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:21:37.668 [2024-11-20 17:56:04.700452] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:21:41.863 [2024-11-20 17:56:08.651921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.863 [2024-11-20 17:56:08.652151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:21:41.863 [2024-11-20 17:56:08.652178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3957.905 ms 00:21:41.864 [2024-11-20 17:56:08.652193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.864 [2024-11-20 17:56:08.693383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.864 [2024-11-20 17:56:08.693441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:41.864 [2024-11-20 17:56:08.693457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.877 ms 00:21:41.864 [2024-11-20 17:56:08.693470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.864 [2024-11-20 17:56:08.693637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.864 [2024-11-20 17:56:08.693663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:41.864 [2024-11-20 17:56:08.693675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:21:41.864 [2024-11-20 17:56:08.693691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.864 [2024-11-20 17:56:08.754522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.864 [2024-11-20 17:56:08.754581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:41.864 [2024-11-20 17:56:08.754597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.865 ms 00:21:41.864 [2024-11-20 17:56:08.754611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.864 [2024-11-20 17:56:08.754749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.864 [2024-11-20 17:56:08.754783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:41.864 [2024-11-20 17:56:08.754796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:41.864 [2024-11-20 17:56:08.754809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.864 [2024-11-20 17:56:08.755262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.864 [2024-11-20 17:56:08.755290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:41.864 [2024-11-20 17:56:08.755302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:21:41.864 [2024-11-20 17:56:08.755314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.864 [2024-11-20 17:56:08.755430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.864 [2024-11-20 17:56:08.755445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:41.864 [2024-11-20 17:56:08.755456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:21:41.864 [2024-11-20 17:56:08.755471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.864 [2024-11-20 17:56:08.777356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.864 [2024-11-20 17:56:08.777578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:21:41.864 [2024-11-20 17:56:08.777604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.866 ms 00:21:41.864 [2024-11-20 17:56:08.777621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.864 [2024-11-20 17:56:08.790510] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:41.864 [2024-11-20 17:56:08.807196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.864 [2024-11-20 17:56:08.807257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:41.864 [2024-11-20 17:56:08.807277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.433 ms 00:21:41.864 [2024-11-20 17:56:08.807288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.864 [2024-11-20 17:56:08.914802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.864 [2024-11-20 17:56:08.914874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:21:41.864 [2024-11-20 17:56:08.914896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 107.562 ms 00:21:41.864 [2024-11-20 17:56:08.914907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.864 [2024-11-20 17:56:08.915160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.864 [2024-11-20 17:56:08.915175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:41.864 [2024-11-20 17:56:08.915196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:21:41.864 [2024-11-20 17:56:08.915207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.864 [2024-11-20 17:56:08.953625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.864 [2024-11-20 17:56:08.953688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:21:41.864 [2024-11-20 17:56:08.953712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.434 ms 00:21:41.864 [2024-11-20 17:56:08.953723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.864 [2024-11-20 17:56:08.990923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.864 [2024-11-20 17:56:08.990974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:21:41.864 [2024-11-20 17:56:08.990997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.124 ms 00:21:41.864 [2024-11-20 17:56:08.991008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.864 [2024-11-20 17:56:08.991892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.864 [2024-11-20 17:56:08.991916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:41.864 [2024-11-20 17:56:08.991932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.790 ms 00:21:41.864 [2024-11-20 17:56:08.991943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.123 [2024-11-20 17:56:09.097272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.123 [2024-11-20 17:56:09.097522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:21:42.123 [2024-11-20 17:56:09.097556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 105.454 ms 00:21:42.123 [2024-11-20 17:56:09.097567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
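Before the startup sequence continues, the geometry it has printed so far is worth tying together, since the remaining steps all follow from it: the base device is the 103424 MiB thin-provisioned lvol created earlier (a virtual size far above the 5120 MiB NVMe namespace backing it), its data_btm region is 102400 MiB, and the bdev_ftl_create call traced above passed --overprovisioning 10 and --l2p_dram_limit 60. A small sketch re-deriving the figures the trace reports (the variable names are ours; the input values are copied from the log):

    #!/usr/bin/env bash
    # Cross-check of the FTL geometry printed during startup.
    data_mib=102400      # "Region data_btm ... blocks: 102400.00 MiB"
    block_size=4096      # bdev block size
    op_pct=10            # --overprovisioning 10
    l2p_entry_bytes=4    # "L2P address size: 4"

    data_blocks=$((data_mib * 1024 * 1024 / block_size))     # 26214400 physical blocks
    user_blocks=$((data_blocks * (100 - op_pct) / 100))      # 23592960 = "L2P entries"
    l2p_mib=$((user_blocks * l2p_entry_bytes / 1024 / 1024)) # 90 = "Region l2p ... 90.00 MiB"
    echo "data=$data_blocks user=$user_blocks l2p=${l2p_mib}MiB"

So the full logical-to-physical table is 90 MiB, while --l2p_dram_limit 60 caps how much of it may stay resident; that is the "l2p maximum resident size is: 59 (of 60) MiB" notice above, with the missing 1 MiB presumably reserved for the cache's own bookkeeping. The same 23592960 figure reappears below as num_blocks of the exposed ftl0 bdev.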
00:21:42.123 [2024-11-20 17:56:09.135917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.123 [2024-11-20 17:56:09.136112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:21:42.123 [2024-11-20 17:56:09.136142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.261 ms 00:21:42.123 [2024-11-20 17:56:09.136154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.124 [2024-11-20 17:56:09.174759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.124 [2024-11-20 17:56:09.174973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:21:42.124 [2024-11-20 17:56:09.175002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.566 ms 00:21:42.124 [2024-11-20 17:56:09.175013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.124 [2024-11-20 17:56:09.213208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.124 [2024-11-20 17:56:09.213274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:42.124 [2024-11-20 17:56:09.213295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.140 ms 00:21:42.124 [2024-11-20 17:56:09.213322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.124 [2024-11-20 17:56:09.213429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.124 [2024-11-20 17:56:09.213449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:42.124 [2024-11-20 17:56:09.213473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:42.124 [2024-11-20 17:56:09.213484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.124 [2024-11-20 17:56:09.213582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.124 [2024-11-20 17:56:09.213594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:42.124 [2024-11-20 17:56:09.213611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:21:42.124 [2024-11-20 17:56:09.213622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.124 [2024-11-20 17:56:09.214641] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:42.124 [2024-11-20 17:56:09.219080] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4542.489 ms, result 0 00:21:42.124 [2024-11-20 17:56:09.220149] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:42.124 { 00:21:42.124 "name": "ftl0", 00:21:42.124 "uuid": "5259f8b1-3ab3-43ba-9a28-e53cd5fd0400" 00:21:42.124 } 00:21:42.124 17:56:09 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:21:42.124 17:56:09 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:21:42.124 17:56:09 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:21:42.124 17:56:09 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:21:42.124 17:56:09 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:21:42.124 17:56:09 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:21:42.124 17:56:09 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:21:42.382 17:56:09 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:21:42.641 [ 00:21:42.641 { 00:21:42.641 "name": "ftl0", 00:21:42.641 "aliases": [ 00:21:42.641 "5259f8b1-3ab3-43ba-9a28-e53cd5fd0400" 00:21:42.641 ], 00:21:42.641 "product_name": "FTL disk", 00:21:42.641 "block_size": 4096, 00:21:42.641 "num_blocks": 23592960, 00:21:42.641 "uuid": "5259f8b1-3ab3-43ba-9a28-e53cd5fd0400", 00:21:42.641 "assigned_rate_limits": { 00:21:42.641 "rw_ios_per_sec": 0, 00:21:42.641 "rw_mbytes_per_sec": 0, 00:21:42.641 "r_mbytes_per_sec": 0, 00:21:42.641 "w_mbytes_per_sec": 0 00:21:42.641 }, 00:21:42.641 "claimed": false, 00:21:42.641 "zoned": false, 00:21:42.641 "supported_io_types": { 00:21:42.641 "read": true, 00:21:42.641 "write": true, 00:21:42.641 "unmap": true, 00:21:42.641 "flush": true, 00:21:42.641 "reset": false, 00:21:42.641 "nvme_admin": false, 00:21:42.641 "nvme_io": false, 00:21:42.641 "nvme_io_md": false, 00:21:42.641 "write_zeroes": true, 00:21:42.641 "zcopy": false, 00:21:42.641 "get_zone_info": false, 00:21:42.641 "zone_management": false, 00:21:42.641 "zone_append": false, 00:21:42.641 "compare": false, 00:21:42.641 "compare_and_write": false, 00:21:42.641 "abort": false, 00:21:42.641 "seek_hole": false, 00:21:42.641 "seek_data": false, 00:21:42.641 "copy": false, 00:21:42.641 "nvme_iov_md": false 00:21:42.641 }, 00:21:42.641 "driver_specific": { 00:21:42.641 "ftl": { 00:21:42.641 "base_bdev": "60f31de7-044d-4af0-b080-6b6fab8e3621", 00:21:42.641 "cache": "nvc0n1p0" 00:21:42.641 } 00:21:42.641 } 00:21:42.641 } 00:21:42.641 ] 00:21:42.641 17:56:09 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:21:42.641 17:56:09 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:21:42.641 17:56:09 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:21:42.900 17:56:09 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:21:42.900 17:56:09 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:21:43.159 17:56:10 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:21:43.159 { 00:21:43.159 "name": "ftl0", 00:21:43.159 "aliases": [ 00:21:43.159 "5259f8b1-3ab3-43ba-9a28-e53cd5fd0400" 00:21:43.159 ], 00:21:43.159 "product_name": "FTL disk", 00:21:43.159 "block_size": 4096, 00:21:43.159 "num_blocks": 23592960, 00:21:43.159 "uuid": "5259f8b1-3ab3-43ba-9a28-e53cd5fd0400", 00:21:43.159 "assigned_rate_limits": { 00:21:43.159 "rw_ios_per_sec": 0, 00:21:43.159 "rw_mbytes_per_sec": 0, 00:21:43.159 "r_mbytes_per_sec": 0, 00:21:43.159 "w_mbytes_per_sec": 0 00:21:43.159 }, 00:21:43.159 "claimed": false, 00:21:43.159 "zoned": false, 00:21:43.159 "supported_io_types": { 00:21:43.159 "read": true, 00:21:43.159 "write": true, 00:21:43.159 "unmap": true, 00:21:43.159 "flush": true, 00:21:43.159 "reset": false, 00:21:43.159 "nvme_admin": false, 00:21:43.159 "nvme_io": false, 00:21:43.159 "nvme_io_md": false, 00:21:43.159 "write_zeroes": true, 00:21:43.159 "zcopy": false, 00:21:43.159 "get_zone_info": false, 00:21:43.159 "zone_management": false, 00:21:43.159 "zone_append": false, 00:21:43.159 "compare": false, 00:21:43.159 "compare_and_write": false, 00:21:43.159 "abort": false, 00:21:43.159 "seek_hole": false, 00:21:43.159 "seek_data": false, 00:21:43.159 "copy": false, 00:21:43.159 "nvme_iov_md": false 00:21:43.159 }, 00:21:43.159 "driver_specific": { 00:21:43.159 "ftl": { 00:21:43.159 "base_bdev": "60f31de7-044d-4af0-b080-6b6fab8e3621", 
00:21:43.159 "cache": "nvc0n1p0" 00:21:43.159 } 00:21:43.159 } 00:21:43.159 } 00:21:43.159 ]' 00:21:43.159 17:56:10 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:21:43.159 17:56:10 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:21:43.159 17:56:10 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:21:43.420 [2024-11-20 17:56:10.339552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.421 [2024-11-20 17:56:10.339609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:43.421 [2024-11-20 17:56:10.339632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:43.421 [2024-11-20 17:56:10.339653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.421 [2024-11-20 17:56:10.339692] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:43.421 [2024-11-20 17:56:10.343879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.421 [2024-11-20 17:56:10.343912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:43.421 [2024-11-20 17:56:10.343936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.164 ms 00:21:43.421 [2024-11-20 17:56:10.343947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.421 [2024-11-20 17:56:10.344511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.421 [2024-11-20 17:56:10.344530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:43.421 [2024-11-20 17:56:10.344547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.481 ms 00:21:43.421 [2024-11-20 17:56:10.344557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.421 [2024-11-20 17:56:10.347395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.421 [2024-11-20 17:56:10.347426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:43.421 [2024-11-20 17:56:10.347442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.806 ms 00:21:43.421 [2024-11-20 17:56:10.347453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.421 [2024-11-20 17:56:10.353099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.421 [2024-11-20 17:56:10.353277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:43.421 [2024-11-20 17:56:10.353310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.613 ms 00:21:43.421 [2024-11-20 17:56:10.353321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.421 [2024-11-20 17:56:10.390543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.421 [2024-11-20 17:56:10.390585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:43.421 [2024-11-20 17:56:10.390610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.176 ms 00:21:43.421 [2024-11-20 17:56:10.390621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.421 [2024-11-20 17:56:10.412854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.421 [2024-11-20 17:56:10.412893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:43.421 [2024-11-20 17:56:10.412914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 22.171 ms 00:21:43.421 [2024-11-20 17:56:10.412930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.421 [2024-11-20 17:56:10.413158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.421 [2024-11-20 17:56:10.413173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:43.421 [2024-11-20 17:56:10.413190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:21:43.421 [2024-11-20 17:56:10.413200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.421 [2024-11-20 17:56:10.449881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.421 [2024-11-20 17:56:10.449922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:43.421 [2024-11-20 17:56:10.449941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.700 ms 00:21:43.421 [2024-11-20 17:56:10.449951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.421 [2024-11-20 17:56:10.486451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.421 [2024-11-20 17:56:10.486490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:43.421 [2024-11-20 17:56:10.486514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.466 ms 00:21:43.421 [2024-11-20 17:56:10.486524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.421 [2024-11-20 17:56:10.522473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.421 [2024-11-20 17:56:10.522512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:43.421 [2024-11-20 17:56:10.522530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.884 ms 00:21:43.421 [2024-11-20 17:56:10.522541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.421 [2024-11-20 17:56:10.557892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.421 [2024-11-20 17:56:10.557941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:43.421 [2024-11-20 17:56:10.557962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.258 ms 00:21:43.421 [2024-11-20 17:56:10.557972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.421 [2024-11-20 17:56:10.558064] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:43.421 [2024-11-20 17:56:10.558083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558187] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 
[2024-11-20 17:56:10.558551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:43.421 [2024-11-20 17:56:10.558716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:21:43.422 [2024-11-20 17:56:10.558923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.558987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:43.422 [2024-11-20 17:56:10.559484] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:43.422 [2024-11-20 17:56:10.559499] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5259f8b1-3ab3-43ba-9a28-e53cd5fd0400 00:21:43.422 [2024-11-20 17:56:10.559510] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:43.422 [2024-11-20 17:56:10.559522] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:43.422 [2024-11-20 17:56:10.559532] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:43.422 [2024-11-20 17:56:10.559548] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:43.422 [2024-11-20 17:56:10.559559] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:43.422 [2024-11-20 17:56:10.559571] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:21:43.422 [2024-11-20 17:56:10.559581] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:43.422 [2024-11-20 17:56:10.559593] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:43.422 [2024-11-20 17:56:10.559602] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:43.422 [2024-11-20 17:56:10.559614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.422 [2024-11-20 17:56:10.559625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:43.422 [2024-11-20 17:56:10.559638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.559 ms 00:21:43.422 [2024-11-20 17:56:10.559649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.422 [2024-11-20 17:56:10.579860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.422 [2024-11-20 17:56:10.580015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:43.422 [2024-11-20 17:56:10.580042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.203 ms 00:21:43.422 [2024-11-20 17:56:10.580054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.422 [2024-11-20 17:56:10.580643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:43.422 [2024-11-20 17:56:10.580666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:43.422 [2024-11-20 17:56:10.580680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:21:43.422 [2024-11-20 17:56:10.580690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.682 [2024-11-20 17:56:10.651679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.682 [2024-11-20 17:56:10.651733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:43.682 [2024-11-20 17:56:10.651752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.682 [2024-11-20 17:56:10.651763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.682 [2024-11-20 17:56:10.651949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.682 [2024-11-20 17:56:10.651963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:43.682 [2024-11-20 17:56:10.651980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.682 [2024-11-20 17:56:10.651990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.682 [2024-11-20 17:56:10.652072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.682 [2024-11-20 17:56:10.652086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:43.682 [2024-11-20 17:56:10.652112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.682 [2024-11-20 17:56:10.652122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.682 [2024-11-20 17:56:10.652164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.682 [2024-11-20 17:56:10.652175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:43.682 [2024-11-20 17:56:10.652190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.682 [2024-11-20 17:56:10.652201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.682 [2024-11-20 17:56:10.786283] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.682 [2024-11-20 17:56:10.786474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:43.682 [2024-11-20 17:56:10.786508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.683 [2024-11-20 17:56:10.786520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.942 [2024-11-20 17:56:10.890398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.942 [2024-11-20 17:56:10.890455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:43.942 [2024-11-20 17:56:10.890476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.942 [2024-11-20 17:56:10.890487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.942 [2024-11-20 17:56:10.890626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.942 [2024-11-20 17:56:10.890639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:43.942 [2024-11-20 17:56:10.890679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.942 [2024-11-20 17:56:10.890695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.942 [2024-11-20 17:56:10.890755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.942 [2024-11-20 17:56:10.890780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:43.942 [2024-11-20 17:56:10.890796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.942 [2024-11-20 17:56:10.890806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.942 [2024-11-20 17:56:10.890959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.942 [2024-11-20 17:56:10.890973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:43.942 [2024-11-20 17:56:10.890989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.942 [2024-11-20 17:56:10.891006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.942 [2024-11-20 17:56:10.891075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.942 [2024-11-20 17:56:10.891088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:43.942 [2024-11-20 17:56:10.891103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.942 [2024-11-20 17:56:10.891114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.942 [2024-11-20 17:56:10.891180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.942 [2024-11-20 17:56:10.891192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:43.942 [2024-11-20 17:56:10.891211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.942 [2024-11-20 17:56:10.891222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:43.942 [2024-11-20 17:56:10.891288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:43.942 [2024-11-20 17:56:10.891305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:43.942 [2024-11-20 17:56:10.891320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:43.942 [2024-11-20 17:56:10.891331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:43.942 [2024-11-20 17:56:10.891536] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 552.855 ms, result 0 00:21:43.942 true 00:21:43.942 17:56:10 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78141 00:21:43.942 17:56:10 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78141 ']' 00:21:43.942 17:56:10 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78141 00:21:43.942 17:56:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:21:43.942 17:56:10 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:43.942 17:56:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78141 00:21:43.942 killing process with pid 78141 00:21:43.942 17:56:10 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:43.942 17:56:10 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:43.942 17:56:10 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78141' 00:21:43.942 17:56:10 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78141 00:21:43.942 17:56:10 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78141 00:21:46.478 17:56:13 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:21:47.416 65536+0 records in 00:21:47.416 65536+0 records out 00:21:47.416 268435456 bytes (268 MB, 256 MiB) copied, 1.0173 s, 264 MB/s 00:21:47.416 17:56:14 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:47.416 [2024-11-20 17:56:14.526872] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
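The two commands above come from ftl/trim.sh@66 and @69: dd builds a 256 MiB random pattern (65536 records x 4 KiB = 268435456 bytes, matching the statistics printed), and spdk_dd replays it into the ftl0 bdev. A minimal standalone sketch of the same step, assuming the repo layout used in this run (the dd output path is inferred from the spdk_dd --if argument and is not shown verbatim in the log):

# Build a 256 MiB random pattern: 65536 x 4 KiB = 268435456 bytes.
dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536
# Write it through the FTL bdev with spdk_dd, using the ftl.json
# config generated earlier by the test.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
  --ob=ftl0 \
  --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json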
00:21:47.416 [2024-11-20 17:56:14.526999] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78355 ] 00:21:47.689 [2024-11-20 17:56:14.716104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.689 [2024-11-20 17:56:14.829883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.293 [2024-11-20 17:56:15.199524] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:48.293 [2024-11-20 17:56:15.199594] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:48.293 [2024-11-20 17:56:15.370609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.293 [2024-11-20 17:56:15.370659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:48.293 [2024-11-20 17:56:15.370675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:48.293 [2024-11-20 17:56:15.370686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.293 [2024-11-20 17:56:15.374685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.293 [2024-11-20 17:56:15.374857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:48.293 [2024-11-20 17:56:15.374879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.984 ms 00:21:48.293 [2024-11-20 17:56:15.374891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.293 [2024-11-20 17:56:15.375161] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:48.293 [2024-11-20 17:56:15.376198] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:48.293 [2024-11-20 17:56:15.376233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.293 [2024-11-20 17:56:15.376244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:48.293 [2024-11-20 17:56:15.376255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.085 ms 00:21:48.293 [2024-11-20 17:56:15.376265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.293 [2024-11-20 17:56:15.377738] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:48.293 [2024-11-20 17:56:15.398046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.293 [2024-11-20 17:56:15.398092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:48.293 [2024-11-20 17:56:15.398107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.342 ms 00:21:48.293 [2024-11-20 17:56:15.398118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.293 [2024-11-20 17:56:15.398220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.293 [2024-11-20 17:56:15.398244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:48.293 [2024-11-20 17:56:15.398256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:21:48.293 [2024-11-20 17:56:15.398265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.293 [2024-11-20 17:56:15.404991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:48.293 [2024-11-20 17:56:15.405154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:48.293 [2024-11-20 17:56:15.405174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.696 ms 00:21:48.293 [2024-11-20 17:56:15.405185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.293 [2024-11-20 17:56:15.405291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.293 [2024-11-20 17:56:15.405305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:48.293 [2024-11-20 17:56:15.405316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:48.293 [2024-11-20 17:56:15.405326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.294 [2024-11-20 17:56:15.405356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.294 [2024-11-20 17:56:15.405372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:48.294 [2024-11-20 17:56:15.405382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:48.294 [2024-11-20 17:56:15.405392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.294 [2024-11-20 17:56:15.405414] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:21:48.294 [2024-11-20 17:56:15.410202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.294 [2024-11-20 17:56:15.410234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:48.294 [2024-11-20 17:56:15.410246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.801 ms 00:21:48.294 [2024-11-20 17:56:15.410256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.294 [2024-11-20 17:56:15.410323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.294 [2024-11-20 17:56:15.410336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:48.294 [2024-11-20 17:56:15.410348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:48.294 [2024-11-20 17:56:15.410357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.294 [2024-11-20 17:56:15.410377] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:48.294 [2024-11-20 17:56:15.410403] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:48.294 [2024-11-20 17:56:15.410438] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:48.294 [2024-11-20 17:56:15.410455] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:48.294 [2024-11-20 17:56:15.410544] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:48.294 [2024-11-20 17:56:15.410557] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:48.294 [2024-11-20 17:56:15.410570] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:48.294 [2024-11-20 17:56:15.410582] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:48.294 [2024-11-20 17:56:15.410598] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:48.294 [2024-11-20 17:56:15.410609] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:21:48.294 [2024-11-20 17:56:15.410619] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:48.294 [2024-11-20 17:56:15.410628] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:48.294 [2024-11-20 17:56:15.410638] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:48.294 [2024-11-20 17:56:15.410649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.294 [2024-11-20 17:56:15.410659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:48.294 [2024-11-20 17:56:15.410669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:21:48.294 [2024-11-20 17:56:15.410680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.294 [2024-11-20 17:56:15.410755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.294 [2024-11-20 17:56:15.410786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:48.294 [2024-11-20 17:56:15.410797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:48.294 [2024-11-20 17:56:15.410807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.294 [2024-11-20 17:56:15.410898] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:48.294 [2024-11-20 17:56:15.410911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:48.294 [2024-11-20 17:56:15.410921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:48.294 [2024-11-20 17:56:15.410932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.294 [2024-11-20 17:56:15.410942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:48.294 [2024-11-20 17:56:15.410951] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:48.294 [2024-11-20 17:56:15.410961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:21:48.294 [2024-11-20 17:56:15.410972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:48.294 [2024-11-20 17:56:15.410981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:21:48.294 [2024-11-20 17:56:15.410992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:48.294 [2024-11-20 17:56:15.411001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:48.294 [2024-11-20 17:56:15.411010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:21:48.294 [2024-11-20 17:56:15.411020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:48.294 [2024-11-20 17:56:15.411040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:48.294 [2024-11-20 17:56:15.411050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:21:48.294 [2024-11-20 17:56:15.411060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.294 [2024-11-20 17:56:15.411069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:48.294 [2024-11-20 17:56:15.411079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:21:48.294 [2024-11-20 17:56:15.411088] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.294 [2024-11-20 17:56:15.411097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:48.294 [2024-11-20 17:56:15.411107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:21:48.294 [2024-11-20 17:56:15.411116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.294 [2024-11-20 17:56:15.411126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:48.294 [2024-11-20 17:56:15.411135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:21:48.294 [2024-11-20 17:56:15.411144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.294 [2024-11-20 17:56:15.411153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:48.294 [2024-11-20 17:56:15.411163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:21:48.294 [2024-11-20 17:56:15.411172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.294 [2024-11-20 17:56:15.411181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:48.294 [2024-11-20 17:56:15.411190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:21:48.294 [2024-11-20 17:56:15.411199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.294 [2024-11-20 17:56:15.411207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:48.294 [2024-11-20 17:56:15.411217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:21:48.294 [2024-11-20 17:56:15.411225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:48.294 [2024-11-20 17:56:15.411234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:48.294 [2024-11-20 17:56:15.411243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:21:48.294 [2024-11-20 17:56:15.411252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:48.294 [2024-11-20 17:56:15.411261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:48.294 [2024-11-20 17:56:15.411269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:21:48.294 [2024-11-20 17:56:15.411278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.294 [2024-11-20 17:56:15.411287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:48.294 [2024-11-20 17:56:15.411297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:21:48.294 [2024-11-20 17:56:15.411306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.294 [2024-11-20 17:56:15.411315] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:48.294 [2024-11-20 17:56:15.411325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:48.294 [2024-11-20 17:56:15.411335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:48.294 [2024-11-20 17:56:15.411348] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.294 [2024-11-20 17:56:15.411359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:48.294 [2024-11-20 17:56:15.411368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:48.294 [2024-11-20 17:56:15.411377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:48.294 
[2024-11-20 17:56:15.411387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:48.294 [2024-11-20 17:56:15.411396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:48.294 [2024-11-20 17:56:15.411405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:48.294 [2024-11-20 17:56:15.411416] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:48.294 [2024-11-20 17:56:15.411428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:48.294 [2024-11-20 17:56:15.411440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:21:48.295 [2024-11-20 17:56:15.411450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:21:48.295 [2024-11-20 17:56:15.411460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:21:48.295 [2024-11-20 17:56:15.411471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:21:48.295 [2024-11-20 17:56:15.411482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:21:48.295 [2024-11-20 17:56:15.411493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:21:48.295 [2024-11-20 17:56:15.411503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:21:48.295 [2024-11-20 17:56:15.411513] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:21:48.295 [2024-11-20 17:56:15.411523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:21:48.295 [2024-11-20 17:56:15.411533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:21:48.295 [2024-11-20 17:56:15.411544] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:21:48.295 [2024-11-20 17:56:15.411554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:21:48.295 [2024-11-20 17:56:15.411564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:21:48.295 [2024-11-20 17:56:15.411574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:21:48.295 [2024-11-20 17:56:15.411584] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:48.295 [2024-11-20 17:56:15.411596] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:48.295 [2024-11-20 17:56:15.411607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:48.295 [2024-11-20 17:56:15.411617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:48.295 [2024-11-20 17:56:15.411628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:48.295 [2024-11-20 17:56:15.411638] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:48.295 [2024-11-20 17:56:15.411648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.295 [2024-11-20 17:56:15.411658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:48.295 [2024-11-20 17:56:15.411672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.806 ms 00:21:48.295 [2024-11-20 17:56:15.411682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.295 [2024-11-20 17:56:15.452990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.295 [2024-11-20 17:56:15.453023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:48.295 [2024-11-20 17:56:15.453037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.322 ms 00:21:48.295 [2024-11-20 17:56:15.453049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.295 [2024-11-20 17:56:15.453171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.295 [2024-11-20 17:56:15.453194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:48.295 [2024-11-20 17:56:15.453205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:48.295 [2024-11-20 17:56:15.453215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.554 [2024-11-20 17:56:15.516098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.554 [2024-11-20 17:56:15.516250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:48.554 [2024-11-20 17:56:15.516272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 62.960 ms 00:21:48.554 [2024-11-20 17:56:15.516294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.554 [2024-11-20 17:56:15.516407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.554 [2024-11-20 17:56:15.516421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:48.554 [2024-11-20 17:56:15.516433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:48.554 [2024-11-20 17:56:15.516443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.554 [2024-11-20 17:56:15.516898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.554 [2024-11-20 17:56:15.516913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:48.554 [2024-11-20 17:56:15.516924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.433 ms 00:21:48.554 [2024-11-20 17:56:15.516945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.554 [2024-11-20 17:56:15.517066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.554 [2024-11-20 17:56:15.517080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:48.554 [2024-11-20 17:56:15.517090] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:21:48.554 [2024-11-20 17:56:15.517100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.554 [2024-11-20 17:56:15.538034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.554 [2024-11-20 17:56:15.538069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:48.554 [2024-11-20 17:56:15.538083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.945 ms 00:21:48.554 [2024-11-20 17:56:15.538093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.554 [2024-11-20 17:56:15.558061] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:21:48.554 [2024-11-20 17:56:15.558117] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:48.554 [2024-11-20 17:56:15.558134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.554 [2024-11-20 17:56:15.558145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:48.554 [2024-11-20 17:56:15.558156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.947 ms 00:21:48.554 [2024-11-20 17:56:15.558166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.554 [2024-11-20 17:56:15.588180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.554 [2024-11-20 17:56:15.588219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:48.554 [2024-11-20 17:56:15.588245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.978 ms 00:21:48.554 [2024-11-20 17:56:15.588256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.554 [2024-11-20 17:56:15.607143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.554 [2024-11-20 17:56:15.607184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:48.554 [2024-11-20 17:56:15.607197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.832 ms 00:21:48.554 [2024-11-20 17:56:15.607207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.554 [2024-11-20 17:56:15.625821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.554 [2024-11-20 17:56:15.625964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:48.554 [2024-11-20 17:56:15.625984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.561 ms 00:21:48.555 [2024-11-20 17:56:15.625996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.555 [2024-11-20 17:56:15.626793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.555 [2024-11-20 17:56:15.626814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:48.555 [2024-11-20 17:56:15.626826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:21:48.555 [2024-11-20 17:56:15.626836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.555 [2024-11-20 17:56:15.714683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.555 [2024-11-20 17:56:15.714745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:48.555 [2024-11-20 17:56:15.714761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 87.960 ms 00:21:48.555 [2024-11-20 17:56:15.714786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.555 [2024-11-20 17:56:15.725874] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:21:48.814 [2024-11-20 17:56:15.742364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.814 [2024-11-20 17:56:15.742412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:48.814 [2024-11-20 17:56:15.742428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.485 ms 00:21:48.814 [2024-11-20 17:56:15.742439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.814 [2024-11-20 17:56:15.742590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.814 [2024-11-20 17:56:15.742610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:48.814 [2024-11-20 17:56:15.742622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:21:48.814 [2024-11-20 17:56:15.742633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.814 [2024-11-20 17:56:15.742686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.814 [2024-11-20 17:56:15.742698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:48.814 [2024-11-20 17:56:15.742709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:21:48.814 [2024-11-20 17:56:15.742719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.814 [2024-11-20 17:56:15.742760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.814 [2024-11-20 17:56:15.742796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:48.814 [2024-11-20 17:56:15.742813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:21:48.814 [2024-11-20 17:56:15.742823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.814 [2024-11-20 17:56:15.742868] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:48.814 [2024-11-20 17:56:15.742881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.814 [2024-11-20 17:56:15.742892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:48.814 [2024-11-20 17:56:15.742902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:48.814 [2024-11-20 17:56:15.742912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.814 [2024-11-20 17:56:15.779195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.814 [2024-11-20 17:56:15.779247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:48.814 [2024-11-20 17:56:15.779261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.319 ms 00:21:48.814 [2024-11-20 17:56:15.779287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.814 [2024-11-20 17:56:15.779405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.814 [2024-11-20 17:56:15.779421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:48.814 [2024-11-20 17:56:15.779433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:48.814 [2024-11-20 17:56:15.779442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
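The startup steps traced above square with the num_blocks check at the top of this run: 23592960 L2P entries x 4 KiB data blocks = 92160 MiB of user-visible capacity carved out of the 103424.00 MiB base device reported in the layout dump, with the remainder reserved for FTL metadata and overprovisioning. A hedged sketch of that check, using the stock bdev_get_bdevs RPC and the same jq filter trim.sh@60 applies (the exact RPC feeding jq is not shown in this excerpt, so it is assumed):

# Read the FTL bdev's block count and convert to MiB (4096 B blocks).
nb=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 | jq '.[] .num_blocks')
echo "$(( nb * 4096 / 1048576 )) MiB"   # 23592960 blocks -> 92160 MiB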
00:21:48.814 [2024-11-20 17:56:15.780406] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:48.814 [2024-11-20 17:56:15.784616] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 410.117 ms, result 0 00:21:48.814 [2024-11-20 17:56:15.785521] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:48.814 [2024-11-20 17:56:15.803699] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:49.751  [2024-11-20T17:56:17.865Z] Copying: 22/256 [MB] (22 MBps) [2024-11-20T17:56:19.244Z] Copying: 45/256 [MB] (22 MBps) [2024-11-20T17:56:19.812Z] Copying: 68/256 [MB] (23 MBps) [2024-11-20T17:56:21.190Z] Copying: 91/256 [MB] (23 MBps) [2024-11-20T17:56:22.125Z] Copying: 115/256 [MB] (23 MBps) [2024-11-20T17:56:23.061Z] Copying: 138/256 [MB] (23 MBps) [2024-11-20T17:56:24.030Z] Copying: 162/256 [MB] (23 MBps) [2024-11-20T17:56:24.967Z] Copying: 185/256 [MB] (23 MBps) [2024-11-20T17:56:25.903Z] Copying: 209/256 [MB] (23 MBps) [2024-11-20T17:56:26.840Z] Copying: 234/256 [MB] (24 MBps) [2024-11-20T17:56:26.840Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-20 17:56:26.662844] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:59.664 [2024-11-20 17:56:26.677343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.664 [2024-11-20 17:56:26.677478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:59.664 [2024-11-20 17:56:26.677583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:59.664 [2024-11-20 17:56:26.677622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.664 [2024-11-20 17:56:26.677699] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:21:59.664 [2024-11-20 17:56:26.682040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.664 [2024-11-20 17:56:26.682188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:59.664 [2024-11-20 17:56:26.682265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.294 ms 00:21:59.664 [2024-11-20 17:56:26.682299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.664 [2024-11-20 17:56:26.684215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.664 [2024-11-20 17:56:26.684351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:59.664 [2024-11-20 17:56:26.684428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.868 ms 00:21:59.664 [2024-11-20 17:56:26.684463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.664 [2024-11-20 17:56:26.691387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.664 [2024-11-20 17:56:26.691530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:59.664 [2024-11-20 17:56:26.691561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.909 ms 00:21:59.664 [2024-11-20 17:56:26.691572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.664 [2024-11-20 17:56:26.697241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.664 [2024-11-20 17:56:26.697275] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:59.664 [2024-11-20 17:56:26.697287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.638 ms 00:21:59.664 [2024-11-20 17:56:26.697297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.664 [2024-11-20 17:56:26.733585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.664 [2024-11-20 17:56:26.733755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:59.664 [2024-11-20 17:56:26.733784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.281 ms 00:21:59.664 [2024-11-20 17:56:26.733795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.664 [2024-11-20 17:56:26.755270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.664 [2024-11-20 17:56:26.755311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:59.664 [2024-11-20 17:56:26.755339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.401 ms 00:21:59.664 [2024-11-20 17:56:26.755355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.664 [2024-11-20 17:56:26.755494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.664 [2024-11-20 17:56:26.755508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:59.664 [2024-11-20 17:56:26.755520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:21:59.664 [2024-11-20 17:56:26.755529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.664 [2024-11-20 17:56:26.791921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.664 [2024-11-20 17:56:26.791963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:59.664 [2024-11-20 17:56:26.791977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.432 ms 00:21:59.664 [2024-11-20 17:56:26.791987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.664 [2024-11-20 17:56:26.828670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.664 [2024-11-20 17:56:26.828712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:59.664 [2024-11-20 17:56:26.828726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.481 ms 00:21:59.664 [2024-11-20 17:56:26.828738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.925 [2024-11-20 17:56:26.865943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.925 [2024-11-20 17:56:26.866050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:59.925 [2024-11-20 17:56:26.866069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.176 ms 00:21:59.925 [2024-11-20 17:56:26.866079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.925 [2024-11-20 17:56:26.902516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.925 [2024-11-20 17:56:26.902561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:59.925 [2024-11-20 17:56:26.902575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.336 ms 00:21:59.925 [2024-11-20 17:56:26.902587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.925 [2024-11-20 17:56:26.902680] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 
validity: 00:21:59.925 [2024-11-20 17:56:26.902705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 
wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.902992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:59.925 [2024-11-20 17:56:26.903489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903520] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903807] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:59.926 [2024-11-20 17:56:26.903825] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:59.926 [2024-11-20 17:56:26.903835] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5259f8b1-3ab3-43ba-9a28-e53cd5fd0400 00:21:59.926 [2024-11-20 17:56:26.903846] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:59.926 [2024-11-20 17:56:26.903855] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:59.926 [2024-11-20 17:56:26.903866] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:59.926 [2024-11-20 17:56:26.903876] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:59.926 [2024-11-20 17:56:26.903885] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:59.926 [2024-11-20 17:56:26.903895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:59.926 [2024-11-20 17:56:26.903905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:59.926 [2024-11-20 17:56:26.903914] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:59.926 [2024-11-20 17:56:26.903923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:59.926 [2024-11-20 17:56:26.903933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.926 [2024-11-20 17:56:26.903943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:59.926 [2024-11-20 17:56:26.903957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:21:59.926 [2024-11-20 17:56:26.903967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.926 [2024-11-20 17:56:26.923300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.926 [2024-11-20 17:56:26.923338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:59.926 [2024-11-20 17:56:26.923351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.341 ms 00:21:59.926 [2024-11-20 17:56:26.923363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.926 [2024-11-20 17:56:26.923937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:59.926 [2024-11-20 17:56:26.923956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:59.926 [2024-11-20 17:56:26.923967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:21:59.926 [2024-11-20 17:56:26.923978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.926 [2024-11-20 17:56:26.978336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.926 [2024-11-20 17:56:26.978382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:59.926 [2024-11-20 17:56:26.978396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.926 [2024-11-20 17:56:26.978406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.926 [2024-11-20 17:56:26.978535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.926 [2024-11-20 17:56:26.978554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:59.926 [2024-11-20 17:56:26.978565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.926 [2024-11-20 17:56:26.978575] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.926 [2024-11-20 17:56:26.978630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.926 [2024-11-20 17:56:26.978644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:59.926 [2024-11-20 17:56:26.978654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.926 [2024-11-20 17:56:26.978664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:59.926 [2024-11-20 17:56:26.978683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:59.926 [2024-11-20 17:56:26.978694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:59.926 [2024-11-20 17:56:26.978712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:59.926 [2024-11-20 17:56:26.978722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.186 [2024-11-20 17:56:27.102514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.186 [2024-11-20 17:56:27.102572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:00.186 [2024-11-20 17:56:27.102587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.186 [2024-11-20 17:56:27.102599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.186 [2024-11-20 17:56:27.203945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.186 [2024-11-20 17:56:27.204223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:00.186 [2024-11-20 17:56:27.204248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.186 [2024-11-20 17:56:27.204259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.186 [2024-11-20 17:56:27.204358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.186 [2024-11-20 17:56:27.204371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:00.186 [2024-11-20 17:56:27.204382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.186 [2024-11-20 17:56:27.204392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.186 [2024-11-20 17:56:27.204422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.186 [2024-11-20 17:56:27.204433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:00.186 [2024-11-20 17:56:27.204443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.186 [2024-11-20 17:56:27.204459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.186 [2024-11-20 17:56:27.204598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.186 [2024-11-20 17:56:27.204611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:00.186 [2024-11-20 17:56:27.204622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.186 [2024-11-20 17:56:27.204632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.186 [2024-11-20 17:56:27.204669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.186 [2024-11-20 17:56:27.204683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:00.186 [2024-11-20 17:56:27.204693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:22:00.186 [2024-11-20 17:56:27.204703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.186 [2024-11-20 17:56:27.204747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.186 [2024-11-20 17:56:27.204758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:00.186 [2024-11-20 17:56:27.204787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.186 [2024-11-20 17:56:27.204798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.186 [2024-11-20 17:56:27.204842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:00.186 [2024-11-20 17:56:27.204855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:00.186 [2024-11-20 17:56:27.204865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:00.186 [2024-11-20 17:56:27.204884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:00.186 [2024-11-20 17:56:27.205024] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 528.529 ms, result 0 00:22:01.565 00:22:01.565 00:22:01.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:01.565 17:56:28 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78506 00:22:01.565 17:56:28 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:22:01.565 17:56:28 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78506 00:22:01.565 17:56:28 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78506 ']' 00:22:01.565 17:56:28 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:01.565 17:56:28 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:01.565 17:56:28 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:01.565 17:56:28 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:01.565 17:56:28 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:01.565 [2024-11-20 17:56:28.534261] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
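
Above, trim.sh relaunches spdk_tgt with the ftl_init trace flag (svcpid 78506) and blocks in waitforlisten until the target's JSON-RPC server answers on /var/tmp/spdk.sock. A minimal sketch of that wait, assuming the paths from this run; the real waitforlisten in autotest_common.sh adds timeouts and more robust liveness checks:

    # Condensed, hypothetical equivalent of the trim.sh@71-73 trace above.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # Poll until the RPC server responds on the default UNIX domain socket.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$svcpid" 2>/dev/null || exit 1  # give up if the target died during startup
        sleep 0.1
    done
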
00:22:01.565 [2024-11-20 17:56:28.534811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78506 ] 00:22:01.565 [2024-11-20 17:56:28.716600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.824 [2024-11-20 17:56:28.829598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.762 17:56:29 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:02.762 17:56:29 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:02.762 17:56:29 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:02.762 [2024-11-20 17:56:29.920669] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:02.762 [2024-11-20 17:56:29.920733] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:03.031 [2024-11-20 17:56:30.075404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.031 [2024-11-20 17:56:30.075456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:03.031 [2024-11-20 17:56:30.075475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:03.031 [2024-11-20 17:56:30.075487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.031 [2024-11-20 17:56:30.078560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.031 [2024-11-20 17:56:30.078601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:03.031 [2024-11-20 17:56:30.078616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.056 ms 00:22:03.031 [2024-11-20 17:56:30.078626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.031 [2024-11-20 17:56:30.078744] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:03.031 [2024-11-20 17:56:30.079678] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:03.031 [2024-11-20 17:56:30.079714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.031 [2024-11-20 17:56:30.079725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:03.031 [2024-11-20 17:56:30.079738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:22:03.031 [2024-11-20 17:56:30.079747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.031 [2024-11-20 17:56:30.081208] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:03.031 [2024-11-20 17:56:30.100691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.031 [2024-11-20 17:56:30.100735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:03.031 [2024-11-20 17:56:30.100750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.518 ms 00:22:03.031 [2024-11-20 17:56:30.100763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.031 [2024-11-20 17:56:30.100882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.031 [2024-11-20 17:56:30.100899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:03.031 [2024-11-20 17:56:30.100910] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:03.031 [2024-11-20 17:56:30.100923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.031 [2024-11-20 17:56:30.107641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.031 [2024-11-20 17:56:30.107811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:03.031 [2024-11-20 17:56:30.107831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.680 ms 00:22:03.031 [2024-11-20 17:56:30.107845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.031 [2024-11-20 17:56:30.107961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.031 [2024-11-20 17:56:30.107977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:03.031 [2024-11-20 17:56:30.107988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:22:03.031 [2024-11-20 17:56:30.108001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.031 [2024-11-20 17:56:30.108034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.031 [2024-11-20 17:56:30.108048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:03.031 [2024-11-20 17:56:30.108058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:03.031 [2024-11-20 17:56:30.108070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.031 [2024-11-20 17:56:30.108095] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:03.031 [2024-11-20 17:56:30.112836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.031 [2024-11-20 17:56:30.112880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:03.031 [2024-11-20 17:56:30.112896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.750 ms 00:22:03.031 [2024-11-20 17:56:30.112906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.031 [2024-11-20 17:56:30.112980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.031 [2024-11-20 17:56:30.112993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:03.031 [2024-11-20 17:56:30.113006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:03.031 [2024-11-20 17:56:30.113019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.031 [2024-11-20 17:56:30.113044] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:03.031 [2024-11-20 17:56:30.113065] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:03.031 [2024-11-20 17:56:30.113110] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:03.031 [2024-11-20 17:56:30.113129] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:03.031 [2024-11-20 17:56:30.113220] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:03.031 [2024-11-20 17:56:30.113234] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:03.031 [2024-11-20 17:56:30.113255] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:03.031 [2024-11-20 17:56:30.113269] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:03.031 [2024-11-20 17:56:30.113284] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:03.031 [2024-11-20 17:56:30.113295] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:03.031 [2024-11-20 17:56:30.113308] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:03.031 [2024-11-20 17:56:30.113318] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:03.031 [2024-11-20 17:56:30.113333] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:03.031 [2024-11-20 17:56:30.113343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.031 [2024-11-20 17:56:30.113356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:03.031 [2024-11-20 17:56:30.113367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:22:03.031 [2024-11-20 17:56:30.113380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.031 [2024-11-20 17:56:30.113457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.031 [2024-11-20 17:56:30.113471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:03.031 [2024-11-20 17:56:30.113481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:03.031 [2024-11-20 17:56:30.113493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.031 [2024-11-20 17:56:30.113581] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:03.031 [2024-11-20 17:56:30.113596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:03.031 [2024-11-20 17:56:30.113607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:03.031 [2024-11-20 17:56:30.113620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.031 [2024-11-20 17:56:30.113630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:03.031 [2024-11-20 17:56:30.113650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:03.031 [2024-11-20 17:56:30.113660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:03.031 [2024-11-20 17:56:30.113677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:03.031 [2024-11-20 17:56:30.113688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:03.031 [2024-11-20 17:56:30.113699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:03.031 [2024-11-20 17:56:30.113708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:03.031 [2024-11-20 17:56:30.113720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:03.031 [2024-11-20 17:56:30.113730] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:03.031 [2024-11-20 17:56:30.113742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:03.031 [2024-11-20 17:56:30.113751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:03.031 [2024-11-20 17:56:30.113763] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.031 
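
The geometry in this dump is internally consistent: 23592960 L2P entries at the reported L2P address size of 4 bytes come to exactly the 90.00 MiB shown for the l2p region, and the superblock metadata dump below lists the same region (type:0x2, blk_offs:0x20) with blk_sz:0x5a00, i.e. 23040 blocks, which at the 4 KiB FTL block size the MiB figures imply is again 90 MiB. The entry count also anticipates the trim test further down, whose second unmap starts at LBA 23591936 = 23592960 - 1024, the last 1024 blocks of the device. In shell arithmetic:

    echo $((23592960 * 4 / 1024 / 1024))   # l2p region size in MiB -> 90
    echo $((0x5a00 * 4096 / 1024 / 1024))  # same region via blk_sz   -> 90
    echo $((23592960 - 1024))              # start LBA of the tail unmap -> 23591936
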
[2024-11-20 17:56:30.113795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:03.031 [2024-11-20 17:56:30.113807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:03.031 [2024-11-20 17:56:30.113816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.031 [2024-11-20 17:56:30.113828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:03.031 [2024-11-20 17:56:30.113848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:03.031 [2024-11-20 17:56:30.113860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.031 [2024-11-20 17:56:30.113869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:03.031 [2024-11-20 17:56:30.113883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:03.031 [2024-11-20 17:56:30.113892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.031 [2024-11-20 17:56:30.113904] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:03.031 [2024-11-20 17:56:30.113913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:03.031 [2024-11-20 17:56:30.113925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.031 [2024-11-20 17:56:30.113934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:03.031 [2024-11-20 17:56:30.113945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:03.031 [2024-11-20 17:56:30.113954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:03.031 [2024-11-20 17:56:30.113966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:03.031 [2024-11-20 17:56:30.113975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:03.031 [2024-11-20 17:56:30.113988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:03.031 [2024-11-20 17:56:30.113998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:03.031 [2024-11-20 17:56:30.114016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:03.031 [2024-11-20 17:56:30.114025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:03.031 [2024-11-20 17:56:30.114037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:03.031 [2024-11-20 17:56:30.114046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:03.031 [2024-11-20 17:56:30.114059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.031 [2024-11-20 17:56:30.114074] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:03.031 [2024-11-20 17:56:30.114086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:03.031 [2024-11-20 17:56:30.114095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.031 [2024-11-20 17:56:30.114106] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:03.031 [2024-11-20 17:56:30.114123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:03.031 [2024-11-20 17:56:30.114135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:03.031 [2024-11-20 17:56:30.114145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:03.031 [2024-11-20 17:56:30.114157] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:03.031 [2024-11-20 17:56:30.114167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:03.031 [2024-11-20 17:56:30.114178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:03.031 [2024-11-20 17:56:30.114188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:03.031 [2024-11-20 17:56:30.114199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:03.031 [2024-11-20 17:56:30.114209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:03.031 [2024-11-20 17:56:30.114221] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:03.031 [2024-11-20 17:56:30.114234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:03.031 [2024-11-20 17:56:30.114250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:03.031 [2024-11-20 17:56:30.114261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:03.031 [2024-11-20 17:56:30.114275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:03.031 [2024-11-20 17:56:30.114286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:03.031 [2024-11-20 17:56:30.114298] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:03.031 [2024-11-20 17:56:30.114309] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:03.031 [2024-11-20 17:56:30.114321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:03.032 [2024-11-20 17:56:30.114331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:03.032 [2024-11-20 17:56:30.114343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:03.032 [2024-11-20 17:56:30.114353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:03.032 [2024-11-20 17:56:30.114366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:03.032 [2024-11-20 17:56:30.114376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:03.032 [2024-11-20 17:56:30.114388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:03.032 [2024-11-20 17:56:30.114398] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:03.032 [2024-11-20 17:56:30.114410] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:03.032 [2024-11-20 
17:56:30.114422] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:03.032 [2024-11-20 17:56:30.114437] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:03.032 [2024-11-20 17:56:30.114447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:03.032 [2024-11-20 17:56:30.114460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:03.032 [2024-11-20 17:56:30.114470] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:03.032 [2024-11-20 17:56:30.114484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.032 [2024-11-20 17:56:30.114496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:03.032 [2024-11-20 17:56:30.114509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:22:03.032 [2024-11-20 17:56:30.114518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.032 [2024-11-20 17:56:30.154850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.032 [2024-11-20 17:56:30.154890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:03.032 [2024-11-20 17:56:30.154907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.330 ms 00:22:03.032 [2024-11-20 17:56:30.154922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.032 [2024-11-20 17:56:30.155065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.032 [2024-11-20 17:56:30.155078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:03.032 [2024-11-20 17:56:30.155092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:03.032 [2024-11-20 17:56:30.155102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.032 [2024-11-20 17:56:30.201760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.032 [2024-11-20 17:56:30.201962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:03.292 [2024-11-20 17:56:30.202127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.705 ms 00:22:03.292 [2024-11-20 17:56:30.202166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.292 [2024-11-20 17:56:30.202304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.292 [2024-11-20 17:56:30.202412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:03.292 [2024-11-20 17:56:30.202480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:03.292 [2024-11-20 17:56:30.202512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.292 [2024-11-20 17:56:30.202986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.292 [2024-11-20 17:56:30.203100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:03.292 [2024-11-20 17:56:30.203185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:22:03.292 [2024-11-20 17:56:30.203222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:03.292 [2024-11-20 17:56:30.203368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.292 [2024-11-20 17:56:30.203410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:03.292 [2024-11-20 17:56:30.203485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:22:03.292 [2024-11-20 17:56:30.203520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.292 [2024-11-20 17:56:30.222982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.292 [2024-11-20 17:56:30.223130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:03.292 [2024-11-20 17:56:30.223257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.443 ms 00:22:03.292 [2024-11-20 17:56:30.223295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.292 [2024-11-20 17:56:30.258745] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:03.292 [2024-11-20 17:56:30.258959] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:03.292 [2024-11-20 17:56:30.259057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.292 [2024-11-20 17:56:30.259091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:03.292 [2024-11-20 17:56:30.259124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.670 ms 00:22:03.292 [2024-11-20 17:56:30.259153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.292 [2024-11-20 17:56:30.289355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.292 [2024-11-20 17:56:30.289500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:03.292 [2024-11-20 17:56:30.289617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.148 ms 00:22:03.292 [2024-11-20 17:56:30.289667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.292 [2024-11-20 17:56:30.308464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.292 [2024-11-20 17:56:30.308610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:03.292 [2024-11-20 17:56:30.308687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.701 ms 00:22:03.292 [2024-11-20 17:56:30.308722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.292 [2024-11-20 17:56:30.327266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.292 [2024-11-20 17:56:30.327412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:03.292 [2024-11-20 17:56:30.327542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.450 ms 00:22:03.292 [2024-11-20 17:56:30.327581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.292 [2024-11-20 17:56:30.328427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.292 [2024-11-20 17:56:30.328553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:03.292 [2024-11-20 17:56:30.328634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.693 ms 00:22:03.292 [2024-11-20 17:56:30.328668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.292 [2024-11-20 
17:56:30.415564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.292 [2024-11-20 17:56:30.415793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:03.292 [2024-11-20 17:56:30.415929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.978 ms 00:22:03.292 [2024-11-20 17:56:30.415967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.292 [2024-11-20 17:56:30.426560] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:03.292 [2024-11-20 17:56:30.442800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.292 [2024-11-20 17:56:30.443004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:03.292 [2024-11-20 17:56:30.443151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.735 ms 00:22:03.292 [2024-11-20 17:56:30.443192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.292 [2024-11-20 17:56:30.443322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.292 [2024-11-20 17:56:30.443362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:03.292 [2024-11-20 17:56:30.443454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:03.292 [2024-11-20 17:56:30.443493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.292 [2024-11-20 17:56:30.443573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.292 [2024-11-20 17:56:30.443610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:03.292 [2024-11-20 17:56:30.443693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:22:03.292 [2024-11-20 17:56:30.443731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.292 [2024-11-20 17:56:30.443804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.292 [2024-11-20 17:56:30.443846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:03.292 [2024-11-20 17:56:30.443877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:03.292 [2024-11-20 17:56:30.444014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.292 [2024-11-20 17:56:30.444072] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:03.292 [2024-11-20 17:56:30.444111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.292 [2024-11-20 17:56:30.444187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:03.292 [2024-11-20 17:56:30.444226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:03.292 [2024-11-20 17:56:30.444260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.552 [2024-11-20 17:56:30.481093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.552 [2024-11-20 17:56:30.481235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:03.552 [2024-11-20 17:56:30.481321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.763 ms 00:22:03.552 [2024-11-20 17:56:30.481356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.552 [2024-11-20 17:56:30.481534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.552 [2024-11-20 17:56:30.481579] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:03.552 [2024-11-20 17:56:30.481681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:03.552 [2024-11-20 17:56:30.481717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.552 [2024-11-20 17:56:30.482690] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:03.552 [2024-11-20 17:56:30.486914] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 407.671 ms, result 0 00:22:03.552 [2024-11-20 17:56:30.488410] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:03.552 Some configs were skipped because the RPC state that can call them passed over. 00:22:03.552 17:56:30 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:03.811 [2024-11-20 17:56:30.740284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.811 [2024-11-20 17:56:30.740351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:03.811 [2024-11-20 17:56:30.740368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.718 ms 00:22:03.811 [2024-11-20 17:56:30.740382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.811 [2024-11-20 17:56:30.740422] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.865 ms, result 0 00:22:03.811 true 00:22:03.811 17:56:30 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:03.811 [2024-11-20 17:56:30.963479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:03.811 [2024-11-20 17:56:30.963680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:03.811 [2024-11-20 17:56:30.963714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.062 ms 00:22:03.811 [2024-11-20 17:56:30.963726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:03.811 [2024-11-20 17:56:30.963807] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.379 ms, result 0 00:22:03.811 true 00:22:04.070 17:56:30 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78506 00:22:04.070 17:56:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78506 ']' 00:22:04.070 17:56:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78506 00:22:04.070 17:56:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:04.070 17:56:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:04.070 17:56:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78506 00:22:04.070 killing process with pid 78506 00:22:04.070 17:56:31 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:04.070 17:56:31 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:04.070 17:56:31 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78506' 00:22:04.070 17:56:31 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78506 00:22:04.070 17:56:31 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78506 00:22:05.008 [2024-11-20 17:56:32.152207] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.008 [2024-11-20 17:56:32.152266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:05.008 [2024-11-20 17:56:32.152282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:05.008 [2024-11-20 17:56:32.152297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.008 [2024-11-20 17:56:32.152321] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:05.008 [2024-11-20 17:56:32.156548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.008 [2024-11-20 17:56:32.156580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:05.008 [2024-11-20 17:56:32.156598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.213 ms 00:22:05.008 [2024-11-20 17:56:32.156608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.008 [2024-11-20 17:56:32.156892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.008 [2024-11-20 17:56:32.156908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:05.008 [2024-11-20 17:56:32.156921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.218 ms 00:22:05.008 [2024-11-20 17:56:32.156931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.008 [2024-11-20 17:56:32.160248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.008 [2024-11-20 17:56:32.160286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:05.008 [2024-11-20 17:56:32.160301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.298 ms 00:22:05.008 [2024-11-20 17:56:32.160311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.008 [2024-11-20 17:56:32.165985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.008 [2024-11-20 17:56:32.166022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:05.008 [2024-11-20 17:56:32.166036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.643 ms 00:22:05.008 [2024-11-20 17:56:32.166046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.008 [2024-11-20 17:56:32.181380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.008 [2024-11-20 17:56:32.181413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:05.008 [2024-11-20 17:56:32.181432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.297 ms 00:22:05.008 [2024-11-20 17:56:32.181453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.268 [2024-11-20 17:56:32.192478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.268 [2024-11-20 17:56:32.192518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:05.268 [2024-11-20 17:56:32.192535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.970 ms 00:22:05.268 [2024-11-20 17:56:32.192546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.268 [2024-11-20 17:56:32.192689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.268 [2024-11-20 17:56:32.192702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:05.268 [2024-11-20 17:56:32.192715] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:22:05.268 [2024-11-20 17:56:32.192725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.268 [2024-11-20 17:56:32.208219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.268 [2024-11-20 17:56:32.208255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:05.268 [2024-11-20 17:56:32.208271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.495 ms 00:22:05.268 [2024-11-20 17:56:32.208280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.268 [2024-11-20 17:56:32.223853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.268 [2024-11-20 17:56:32.223888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:05.268 [2024-11-20 17:56:32.223907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.542 ms 00:22:05.268 [2024-11-20 17:56:32.223917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.268 [2024-11-20 17:56:32.238474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.269 [2024-11-20 17:56:32.238506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:05.269 [2024-11-20 17:56:32.238525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.527 ms 00:22:05.269 [2024-11-20 17:56:32.238534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-11-20 17:56:32.253853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.269 [2024-11-20 17:56:32.254004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:05.269 [2024-11-20 17:56:32.254029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.266 ms 00:22:05.269 [2024-11-20 17:56:32.254038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.269 [2024-11-20 17:56:32.254112] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:05.269 [2024-11-20 17:56:32.254131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 
17:56:32.254259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:22:05.269 [2024-11-20 17:56:32.254558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.254991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.255003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.255016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.255027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.255039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.255049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.255062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.255072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.255085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.255095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.255108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:05.269 [2024-11-20 17:56:32.255119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:05.270 [2024-11-20 17:56:32.255360] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:05.270 [2024-11-20 17:56:32.255375] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5259f8b1-3ab3-43ba-9a28-e53cd5fd0400 00:22:05.270 [2024-11-20 17:56:32.255400] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:05.270 [2024-11-20 17:56:32.255413] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:05.270 [2024-11-20 17:56:32.255423] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:05.270 [2024-11-20 17:56:32.255436] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:05.270 [2024-11-20 17:56:32.255445] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:05.270 [2024-11-20 17:56:32.255457] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:05.270 [2024-11-20 17:56:32.255467] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:05.270 [2024-11-20 17:56:32.255478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:05.270 [2024-11-20 17:56:32.255487] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:05.270 [2024-11-20 17:56:32.255500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:05.270 [2024-11-20 17:56:32.255510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:05.270 [2024-11-20 17:56:32.255523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.392 ms 00:22:05.270 [2024-11-20 17:56:32.255536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.270 [2024-11-20 17:56:32.275613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.270 [2024-11-20 17:56:32.275647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:05.270 [2024-11-20 17:56:32.275666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.083 ms 00:22:05.270 [2024-11-20 17:56:32.275676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.270 [2024-11-20 17:56:32.276292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:05.270 [2024-11-20 17:56:32.276311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:05.270 [2024-11-20 17:56:32.276329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:22:05.270 [2024-11-20 17:56:32.276338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.270 [2024-11-20 17:56:32.345358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.270 [2024-11-20 17:56:32.345546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:05.270 [2024-11-20 17:56:32.345574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.270 [2024-11-20 17:56:32.345585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.270 [2024-11-20 17:56:32.345704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.270 [2024-11-20 17:56:32.345717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:05.270 [2024-11-20 17:56:32.345734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.270 [2024-11-20 17:56:32.345744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.270 [2024-11-20 17:56:32.345827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.270 [2024-11-20 17:56:32.345841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:05.270 [2024-11-20 17:56:32.345858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.270 [2024-11-20 17:56:32.345868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.270 [2024-11-20 17:56:32.345891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.270 [2024-11-20 17:56:32.345901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:05.270 [2024-11-20 17:56:32.345914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.270 [2024-11-20 17:56:32.345927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.530 [2024-11-20 17:56:32.473960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.530 [2024-11-20 17:56:32.474025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:05.530 [2024-11-20 17:56:32.474045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.530 [2024-11-20 17:56:32.474056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.530 [2024-11-20 
17:56:32.575501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.530 [2024-11-20 17:56:32.575564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:05.530 [2024-11-20 17:56:32.575586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.530 [2024-11-20 17:56:32.575598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.530 [2024-11-20 17:56:32.575718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.530 [2024-11-20 17:56:32.575731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:05.530 [2024-11-20 17:56:32.575748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.530 [2024-11-20 17:56:32.575758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.530 [2024-11-20 17:56:32.575813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.530 [2024-11-20 17:56:32.575825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:05.530 [2024-11-20 17:56:32.575838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.530 [2024-11-20 17:56:32.575848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.530 [2024-11-20 17:56:32.575999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.530 [2024-11-20 17:56:32.576013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:05.530 [2024-11-20 17:56:32.576026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.530 [2024-11-20 17:56:32.576036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.530 [2024-11-20 17:56:32.576079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.530 [2024-11-20 17:56:32.576091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:05.530 [2024-11-20 17:56:32.576104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.530 [2024-11-20 17:56:32.576114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.530 [2024-11-20 17:56:32.576158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.530 [2024-11-20 17:56:32.576169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:05.530 [2024-11-20 17:56:32.576185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.530 [2024-11-20 17:56:32.576195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.530 [2024-11-20 17:56:32.576241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:05.530 [2024-11-20 17:56:32.576252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:05.530 [2024-11-20 17:56:32.576265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:05.530 [2024-11-20 17:56:32.576275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:05.530 [2024-11-20 17:56:32.576418] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 424.876 ms, result 0 00:22:06.468 17:56:33 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:06.468 17:56:33 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:06.728 [2024-11-20 17:56:33.686255] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:22:06.728 [2024-11-20 17:56:33.686380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78577 ] 00:22:06.728 [2024-11-20 17:56:33.869222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.987 [2024-11-20 17:56:33.985037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.246 [2024-11-20 17:56:34.381304] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:07.246 [2024-11-20 17:56:34.381376] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:07.506 [2024-11-20 17:56:34.544810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.506 [2024-11-20 17:56:34.545001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:07.506 [2024-11-20 17:56:34.545026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:07.506 [2024-11-20 17:56:34.545038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.506 [2024-11-20 17:56:34.548178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.506 [2024-11-20 17:56:34.548319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:07.506 [2024-11-20 17:56:34.548341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.116 ms 00:22:07.506 [2024-11-20 17:56:34.548353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.506 [2024-11-20 17:56:34.548509] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:07.506 [2024-11-20 17:56:34.549454] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:07.506 [2024-11-20 17:56:34.549487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.506 [2024-11-20 17:56:34.549499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:07.506 [2024-11-20 17:56:34.549511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.989 ms 00:22:07.506 [2024-11-20 17:56:34.549521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.506 [2024-11-20 17:56:34.551061] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:07.507 [2024-11-20 17:56:34.570825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.507 [2024-11-20 17:56:34.570870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:07.507 [2024-11-20 17:56:34.570885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.797 ms 00:22:07.507 [2024-11-20 17:56:34.570896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.507 [2024-11-20 17:56:34.570997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.507 [2024-11-20 17:56:34.571012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:07.507 [2024-11-20 17:56:34.571024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.024 ms 00:22:07.507 [2024-11-20 17:56:34.571034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.507 [2024-11-20 17:56:34.577804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.507 [2024-11-20 17:56:34.577950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:07.507 [2024-11-20 17:56:34.577971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.737 ms 00:22:07.507 [2024-11-20 17:56:34.577982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.507 [2024-11-20 17:56:34.578092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.507 [2024-11-20 17:56:34.578106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:07.507 [2024-11-20 17:56:34.578118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:07.507 [2024-11-20 17:56:34.578128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.507 [2024-11-20 17:56:34.578157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.507 [2024-11-20 17:56:34.578172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:07.507 [2024-11-20 17:56:34.578183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:07.507 [2024-11-20 17:56:34.578193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.507 [2024-11-20 17:56:34.578216] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:07.507 [2024-11-20 17:56:34.583092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.507 [2024-11-20 17:56:34.583124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:07.507 [2024-11-20 17:56:34.583137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.889 ms 00:22:07.507 [2024-11-20 17:56:34.583148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.507 [2024-11-20 17:56:34.583217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.507 [2024-11-20 17:56:34.583230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:07.507 [2024-11-20 17:56:34.583241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:07.507 [2024-11-20 17:56:34.583252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.507 [2024-11-20 17:56:34.583276] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:07.507 [2024-11-20 17:56:34.583302] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:07.507 [2024-11-20 17:56:34.583338] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:07.507 [2024-11-20 17:56:34.583356] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:07.507 [2024-11-20 17:56:34.583453] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:07.507 [2024-11-20 17:56:34.583467] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:07.507 [2024-11-20 17:56:34.583480] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:07.507 [2024-11-20 17:56:34.583493] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:07.507 [2024-11-20 17:56:34.583509] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:07.507 [2024-11-20 17:56:34.583526] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:07.507 [2024-11-20 17:56:34.583537] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:07.507 [2024-11-20 17:56:34.583547] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:07.507 [2024-11-20 17:56:34.583556] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:07.507 [2024-11-20 17:56:34.583567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.507 [2024-11-20 17:56:34.583577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:07.507 [2024-11-20 17:56:34.583588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:22:07.507 [2024-11-20 17:56:34.583598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.507 [2024-11-20 17:56:34.583675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.507 [2024-11-20 17:56:34.583689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:07.507 [2024-11-20 17:56:34.583700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:07.507 [2024-11-20 17:56:34.583710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.507 [2024-11-20 17:56:34.583822] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:07.507 [2024-11-20 17:56:34.583837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:07.507 [2024-11-20 17:56:34.583848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:07.507 [2024-11-20 17:56:34.583858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.507 [2024-11-20 17:56:34.583869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:07.507 [2024-11-20 17:56:34.583878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:07.507 [2024-11-20 17:56:34.583888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:07.507 [2024-11-20 17:56:34.583899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:07.507 [2024-11-20 17:56:34.583908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:07.507 [2024-11-20 17:56:34.583918] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:07.507 [2024-11-20 17:56:34.583928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:07.507 [2024-11-20 17:56:34.583938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:07.507 [2024-11-20 17:56:34.583947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:07.507 [2024-11-20 17:56:34.583968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:07.507 [2024-11-20 17:56:34.583978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:07.507 [2024-11-20 17:56:34.583987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.507 [2024-11-20 17:56:34.583996] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:07.507 [2024-11-20 17:56:34.584006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:07.507 [2024-11-20 17:56:34.584015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.507 [2024-11-20 17:56:34.584025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:07.507 [2024-11-20 17:56:34.584034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:07.507 [2024-11-20 17:56:34.584044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.507 [2024-11-20 17:56:34.584053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:07.507 [2024-11-20 17:56:34.584063] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:07.507 [2024-11-20 17:56:34.584072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.507 [2024-11-20 17:56:34.584081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:07.507 [2024-11-20 17:56:34.584090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:07.507 [2024-11-20 17:56:34.584100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.507 [2024-11-20 17:56:34.584109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:07.507 [2024-11-20 17:56:34.584118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:07.507 [2024-11-20 17:56:34.584127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:07.507 [2024-11-20 17:56:34.584136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:07.507 [2024-11-20 17:56:34.584145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:07.507 [2024-11-20 17:56:34.584154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:07.507 [2024-11-20 17:56:34.584163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:07.507 [2024-11-20 17:56:34.584172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:07.507 [2024-11-20 17:56:34.584180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:07.507 [2024-11-20 17:56:34.584189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:07.507 [2024-11-20 17:56:34.584198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:07.507 [2024-11-20 17:56:34.584207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.507 [2024-11-20 17:56:34.584216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:07.507 [2024-11-20 17:56:34.584224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:07.507 [2024-11-20 17:56:34.584235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.507 [2024-11-20 17:56:34.584244] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:07.507 [2024-11-20 17:56:34.584254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:07.507 [2024-11-20 17:56:34.584264] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:07.507 [2024-11-20 17:56:34.584277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:07.507 [2024-11-20 17:56:34.584287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:07.507 
[2024-11-20 17:56:34.584297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:07.507 [2024-11-20 17:56:34.584306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:07.507 [2024-11-20 17:56:34.584315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:07.507 [2024-11-20 17:56:34.584325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:07.507 [2024-11-20 17:56:34.584334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:07.507 [2024-11-20 17:56:34.584345] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:07.508 [2024-11-20 17:56:34.584358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:07.508 [2024-11-20 17:56:34.584369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:07.508 [2024-11-20 17:56:34.584379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:07.508 [2024-11-20 17:56:34.584389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:07.508 [2024-11-20 17:56:34.584400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:07.508 [2024-11-20 17:56:34.584410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:07.508 [2024-11-20 17:56:34.584420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:07.508 [2024-11-20 17:56:34.584431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:07.508 [2024-11-20 17:56:34.584441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:07.508 [2024-11-20 17:56:34.584451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:07.508 [2024-11-20 17:56:34.584461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:07.508 [2024-11-20 17:56:34.584471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:07.508 [2024-11-20 17:56:34.584481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:07.508 [2024-11-20 17:56:34.584490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:07.508 [2024-11-20 17:56:34.584501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:07.508 [2024-11-20 17:56:34.584511] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:07.508 [2024-11-20 17:56:34.584522] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:07.508 [2024-11-20 17:56:34.584533] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:07.508 [2024-11-20 17:56:34.584543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:07.508 [2024-11-20 17:56:34.584553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:07.508 [2024-11-20 17:56:34.584565] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:07.508 [2024-11-20 17:56:34.584576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.508 [2024-11-20 17:56:34.584586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:07.508 [2024-11-20 17:56:34.584606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.829 ms 00:22:07.508 [2024-11-20 17:56:34.584624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.508 [2024-11-20 17:56:34.626944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.508 [2024-11-20 17:56:34.626985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:07.508 [2024-11-20 17:56:34.627000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.331 ms 00:22:07.508 [2024-11-20 17:56:34.627011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.508 [2024-11-20 17:56:34.627157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.508 [2024-11-20 17:56:34.627176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:07.508 [2024-11-20 17:56:34.627187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:22:07.508 [2024-11-20 17:56:34.627197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.682131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.682173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:07.768 [2024-11-20 17:56:34.682187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.998 ms 00:22:07.768 [2024-11-20 17:56:34.682202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.682315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.682328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:07.768 [2024-11-20 17:56:34.682339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:07.768 [2024-11-20 17:56:34.682349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.682807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.682822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:07.768 [2024-11-20 17:56:34.682833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:22:07.768 [2024-11-20 17:56:34.682850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 
17:56:34.682982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.682997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:07.768 [2024-11-20 17:56:34.683007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:22:07.768 [2024-11-20 17:56:34.683018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.700702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.700741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:07.768 [2024-11-20 17:56:34.700754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.689 ms 00:22:07.768 [2024-11-20 17:56:34.700787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.719384] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:07.768 [2024-11-20 17:56:34.719425] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:07.768 [2024-11-20 17:56:34.719441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.719452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:07.768 [2024-11-20 17:56:34.719464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.565 ms 00:22:07.768 [2024-11-20 17:56:34.719474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.749677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.749729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:07.768 [2024-11-20 17:56:34.749743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.168 ms 00:22:07.768 [2024-11-20 17:56:34.749755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.768205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.768245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:07.768 [2024-11-20 17:56:34.768258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.381 ms 00:22:07.768 [2024-11-20 17:56:34.768269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.786649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.786688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:07.768 [2024-11-20 17:56:34.786702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.330 ms 00:22:07.768 [2024-11-20 17:56:34.786711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.787501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.787534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:07.768 [2024-11-20 17:56:34.787547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:22:07.768 [2024-11-20 17:56:34.787557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.872122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.872190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:07.768 [2024-11-20 17:56:34.872208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.674 ms 00:22:07.768 [2024-11-20 17:56:34.872219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.883471] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:07.768 [2024-11-20 17:56:34.899964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.900020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:07.768 [2024-11-20 17:56:34.900037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.662 ms 00:22:07.768 [2024-11-20 17:56:34.900054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.900195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.900210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:07.768 [2024-11-20 17:56:34.900222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:07.768 [2024-11-20 17:56:34.900232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.900289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.900300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:07.768 [2024-11-20 17:56:34.900311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:22:07.768 [2024-11-20 17:56:34.900322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.900361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.900375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:07.768 [2024-11-20 17:56:34.900385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:22:07.768 [2024-11-20 17:56:34.900395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.900433] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:07.768 [2024-11-20 17:56:34.900445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.900455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:07.768 [2024-11-20 17:56:34.900465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:07.768 [2024-11-20 17:56:34.900475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.936832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.936876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:07.768 [2024-11-20 17:56:34.936891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.391 ms 00:22:07.768 [2024-11-20 17:56:34.936902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.937021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:07.768 [2024-11-20 17:56:34.937036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:22:07.768 [2024-11-20 17:56:34.937048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:07.768 [2024-11-20 17:56:34.937058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:07.768 [2024-11-20 17:56:34.938013] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:08.028 [2024-11-20 17:56:34.942324] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 393.505 ms, result 0 00:22:08.028 [2024-11-20 17:56:34.943236] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:08.028 [2024-11-20 17:56:34.961838] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:08.966  [2024-11-20T17:56:37.079Z] Copying: 28/256 [MB] (28 MBps) [2024-11-20T17:56:38.016Z] Copying: 53/256 [MB] (24 MBps) [2024-11-20T17:56:39.392Z] Copying: 78/256 [MB] (25 MBps) [2024-11-20T17:56:39.960Z] Copying: 102/256 [MB] (24 MBps) [2024-11-20T17:56:41.340Z] Copying: 127/256 [MB] (24 MBps) [2024-11-20T17:56:42.277Z] Copying: 152/256 [MB] (24 MBps) [2024-11-20T17:56:43.215Z] Copying: 177/256 [MB] (24 MBps) [2024-11-20T17:56:44.151Z] Copying: 202/256 [MB] (24 MBps) [2024-11-20T17:56:45.087Z] Copying: 226/256 [MB] (24 MBps) [2024-11-20T17:56:45.489Z] Copying: 251/256 [MB] (24 MBps) [2024-11-20T17:56:45.489Z] Copying: 256/256 [MB] (average 25 MBps)[2024-11-20 17:56:45.143057] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:18.313 [2024-11-20 17:56:45.157821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.313 [2024-11-20 17:56:45.157861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:18.313 [2024-11-20 17:56:45.157877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:18.313 [2024-11-20 17:56:45.157894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.313 [2024-11-20 17:56:45.157932] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:18.313 [2024-11-20 17:56:45.162249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.313 [2024-11-20 17:56:45.162276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:18.313 [2024-11-20 17:56:45.162288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.307 ms 00:22:18.313 [2024-11-20 17:56:45.162298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.313 [2024-11-20 17:56:45.162529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.313 [2024-11-20 17:56:45.162542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:18.313 [2024-11-20 17:56:45.162553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:22:18.313 [2024-11-20 17:56:45.162563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.313 [2024-11-20 17:56:45.165567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.313 [2024-11-20 17:56:45.165710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:18.313 [2024-11-20 17:56:45.165852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.990 ms 00:22:18.313 [2024-11-20 17:56:45.165891] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.313 [2024-11-20 17:56:45.171554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.313 [2024-11-20 17:56:45.171701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:18.313 [2024-11-20 17:56:45.171858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.623 ms 00:22:18.313 [2024-11-20 17:56:45.171898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.313 [2024-11-20 17:56:45.208080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.313 [2024-11-20 17:56:45.208228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:18.313 [2024-11-20 17:56:45.208353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.154 ms 00:22:18.313 [2024-11-20 17:56:45.208392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.313 [2024-11-20 17:56:45.229418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.313 [2024-11-20 17:56:45.229569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:18.313 [2024-11-20 17:56:45.229721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.983 ms 00:22:18.313 [2024-11-20 17:56:45.229759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.313 [2024-11-20 17:56:45.229982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.313 [2024-11-20 17:56:45.230072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:18.313 [2024-11-20 17:56:45.230162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:22:18.313 [2024-11-20 17:56:45.230198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.313 [2024-11-20 17:56:45.265874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.313 [2024-11-20 17:56:45.266013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:18.313 [2024-11-20 17:56:45.266083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.676 ms 00:22:18.313 [2024-11-20 17:56:45.266118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.313 [2024-11-20 17:56:45.302299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.313 [2024-11-20 17:56:45.302438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:18.313 [2024-11-20 17:56:45.302459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.145 ms 00:22:18.313 [2024-11-20 17:56:45.302470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.313 [2024-11-20 17:56:45.338032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.313 [2024-11-20 17:56:45.338070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:18.313 [2024-11-20 17:56:45.338084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.545 ms 00:22:18.313 [2024-11-20 17:56:45.338094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.313 [2024-11-20 17:56:45.373546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.313 [2024-11-20 17:56:45.373585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:18.313 [2024-11-20 17:56:45.373598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 35.426 ms 00:22:18.313 [2024-11-20 17:56:45.373608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.313 [2024-11-20 17:56:45.373670] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:18.313 [2024-11-20 17:56:45.373688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:18.313 [2024-11-20 17:56:45.373701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:18.313 [2024-11-20 17:56:45.373713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 
[2024-11-20 17:56:45.373957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.373998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:22:18.314 [2024-11-20 17:56:45.374219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:18.314 [2024-11-20 17:56:45.374655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:18.315 [2024-11-20 17:56:45.374666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:18.315 [2024-11-20 17:56:45.374676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:18.315 [2024-11-20 17:56:45.374686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:18.315 [2024-11-20 17:56:45.374700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:18.315 [2024-11-20 17:56:45.374722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:18.315 [2024-11-20 17:56:45.374733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:18.315 [2024-11-20 17:56:45.374743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:18.315 [2024-11-20 17:56:45.374754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:18.315 [2024-11-20 17:56:45.374764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:18.315 [2024-11-20 17:56:45.374792] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:18.315 [2024-11-20 17:56:45.374802] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5259f8b1-3ab3-43ba-9a28-e53cd5fd0400 00:22:18.315 [2024-11-20 17:56:45.374813] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:18.315 [2024-11-20 17:56:45.374824] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:18.315 [2024-11-20 17:56:45.374833] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:18.315 [2024-11-20 17:56:45.374844] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:18.315 [2024-11-20 17:56:45.374853] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:18.315 [2024-11-20 17:56:45.374863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:18.315 [2024-11-20 17:56:45.374874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:18.315 [2024-11-20 17:56:45.374883] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:18.315 [2024-11-20 17:56:45.374892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:18.315 [2024-11-20 17:56:45.374902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.315 [2024-11-20 17:56:45.374916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:18.315 [2024-11-20 17:56:45.374927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.235 ms 00:22:18.315 [2024-11-20 17:56:45.374937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.315 [2024-11-20 17:56:45.394789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.315 [2024-11-20 17:56:45.394822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:18.315 [2024-11-20 17:56:45.394835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.863 ms 00:22:18.315 [2024-11-20 17:56:45.394845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.315 [2024-11-20 17:56:45.395417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.315 [2024-11-20 17:56:45.395439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:18.315 [2024-11-20 17:56:45.395451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.529 ms 00:22:18.315 [2024-11-20 17:56:45.395461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.315 [2024-11-20 17:56:45.450019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.315 [2024-11-20 17:56:45.450070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:18.315 [2024-11-20 17:56:45.450100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.315 [2024-11-20 17:56:45.450111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.315 [2024-11-20 17:56:45.450222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.315 [2024-11-20 
17:56:45.450235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:18.315 [2024-11-20 17:56:45.450246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.315 [2024-11-20 17:56:45.450256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.315 [2024-11-20 17:56:45.450309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.315 [2024-11-20 17:56:45.450323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:18.315 [2024-11-20 17:56:45.450333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.315 [2024-11-20 17:56:45.450343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.315 [2024-11-20 17:56:45.450362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.315 [2024-11-20 17:56:45.450378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:18.315 [2024-11-20 17:56:45.450388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.315 [2024-11-20 17:56:45.450398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.574 [2024-11-20 17:56:45.575466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.574 [2024-11-20 17:56:45.575527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:18.574 [2024-11-20 17:56:45.575551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.574 [2024-11-20 17:56:45.575562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.574 [2024-11-20 17:56:45.673264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.574 [2024-11-20 17:56:45.673324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:18.574 [2024-11-20 17:56:45.673340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.574 [2024-11-20 17:56:45.673352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.574 [2024-11-20 17:56:45.673431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.574 [2024-11-20 17:56:45.673443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:18.574 [2024-11-20 17:56:45.673454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.574 [2024-11-20 17:56:45.673464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.574 [2024-11-20 17:56:45.673494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.574 [2024-11-20 17:56:45.673505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:18.574 [2024-11-20 17:56:45.673521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.574 [2024-11-20 17:56:45.673531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.574 [2024-11-20 17:56:45.673640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.574 [2024-11-20 17:56:45.673654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:18.574 [2024-11-20 17:56:45.673665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.574 [2024-11-20 17:56:45.673675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.574 [2024-11-20 17:56:45.673713] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.574 [2024-11-20 17:56:45.673725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:18.574 [2024-11-20 17:56:45.673735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.574 [2024-11-20 17:56:45.673749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.574 [2024-11-20 17:56:45.673815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.574 [2024-11-20 17:56:45.673827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:18.574 [2024-11-20 17:56:45.673838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.574 [2024-11-20 17:56:45.673848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.574 [2024-11-20 17:56:45.673891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:18.574 [2024-11-20 17:56:45.673904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:18.574 [2024-11-20 17:56:45.673919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:18.574 [2024-11-20 17:56:45.673929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.574 [2024-11-20 17:56:45.674070] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 517.086 ms, result 0 00:22:19.951 00:22:19.951 00:22:19.951 17:56:46 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:22:19.951 17:56:46 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:20.210 17:56:47 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:20.210 [2024-11-20 17:56:47.301875] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:22:20.210 [2024-11-20 17:56:47.302010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78727 ] 00:22:20.468 [2024-11-20 17:56:47.484247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.468 [2024-11-20 17:56:47.597300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:21.037 [2024-11-20 17:56:47.971238] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:21.037 [2024-11-20 17:56:47.971313] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:21.037 [2024-11-20 17:56:48.132789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-11-20 17:56:48.132843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:21.037 [2024-11-20 17:56:48.132859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:21.037 [2024-11-20 17:56:48.132870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-11-20 17:56:48.135920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-11-20 17:56:48.135962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:21.037 [2024-11-20 17:56:48.135975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.033 ms 00:22:21.037 [2024-11-20 17:56:48.135986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-11-20 17:56:48.136083] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:21.037 [2024-11-20 17:56:48.137070] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:21.037 [2024-11-20 17:56:48.137106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-11-20 17:56:48.137117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:21.037 [2024-11-20 17:56:48.137128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:22:21.037 [2024-11-20 17:56:48.137139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-11-20 17:56:48.138623] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:21.037 [2024-11-20 17:56:48.158081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-11-20 17:56:48.158125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:21.037 [2024-11-20 17:56:48.158140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.490 ms 00:22:21.037 [2024-11-20 17:56:48.158151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-11-20 17:56:48.158247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-11-20 17:56:48.158262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:21.037 [2024-11-20 17:56:48.158274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:21.037 [2024-11-20 17:56:48.158284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-11-20 17:56:48.164996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:21.037 [2024-11-20 17:56:48.165197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:21.037 [2024-11-20 17:56:48.165218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.682 ms 00:22:21.037 [2024-11-20 17:56:48.165228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-11-20 17:56:48.165334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-11-20 17:56:48.165349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:21.037 [2024-11-20 17:56:48.165360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:22:21.037 [2024-11-20 17:56:48.165370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-11-20 17:56:48.165399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-11-20 17:56:48.165414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:21.037 [2024-11-20 17:56:48.165425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:21.037 [2024-11-20 17:56:48.165435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-11-20 17:56:48.165457] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:21.037 [2024-11-20 17:56:48.170260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-11-20 17:56:48.170295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:21.037 [2024-11-20 17:56:48.170308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.816 ms 00:22:21.037 [2024-11-20 17:56:48.170318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-11-20 17:56:48.170384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-11-20 17:56:48.170397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:21.037 [2024-11-20 17:56:48.170408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:21.037 [2024-11-20 17:56:48.170418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.037 [2024-11-20 17:56:48.170438] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:21.037 [2024-11-20 17:56:48.170464] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:21.037 [2024-11-20 17:56:48.170499] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:21.037 [2024-11-20 17:56:48.170517] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:21.037 [2024-11-20 17:56:48.170606] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:21.037 [2024-11-20 17:56:48.170619] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:21.037 [2024-11-20 17:56:48.170632] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:21.037 [2024-11-20 17:56:48.170645] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:21.037 [2024-11-20 17:56:48.170661] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:21.037 [2024-11-20 17:56:48.170673] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:21.037 [2024-11-20 17:56:48.170684] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:21.037 [2024-11-20 17:56:48.170693] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:21.037 [2024-11-20 17:56:48.170703] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:21.037 [2024-11-20 17:56:48.170713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.037 [2024-11-20 17:56:48.170724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:21.037 [2024-11-20 17:56:48.170734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:22:21.037 [2024-11-20 17:56:48.170744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.038 [2024-11-20 17:56:48.170840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.038 [2024-11-20 17:56:48.170856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:21.038 [2024-11-20 17:56:48.170867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:22:21.038 [2024-11-20 17:56:48.170877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.038 [2024-11-20 17:56:48.170967] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:21.038 [2024-11-20 17:56:48.170980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:21.038 [2024-11-20 17:56:48.170992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:21.038 [2024-11-20 17:56:48.171003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.038 [2024-11-20 17:56:48.171013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:21.038 [2024-11-20 17:56:48.171023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:21.038 [2024-11-20 17:56:48.171033] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:21.038 [2024-11-20 17:56:48.171043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:21.038 [2024-11-20 17:56:48.171053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:21.038 [2024-11-20 17:56:48.171063] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:21.038 [2024-11-20 17:56:48.171075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:21.038 [2024-11-20 17:56:48.171084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:21.038 [2024-11-20 17:56:48.171094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:21.038 [2024-11-20 17:56:48.171114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:21.038 [2024-11-20 17:56:48.171124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:21.038 [2024-11-20 17:56:48.171134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.038 [2024-11-20 17:56:48.171143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:21.038 [2024-11-20 17:56:48.171153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:21.038 [2024-11-20 17:56:48.171163] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.038 [2024-11-20 17:56:48.171172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:21.038 [2024-11-20 17:56:48.171181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:21.038 [2024-11-20 17:56:48.171191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:21.038 [2024-11-20 17:56:48.171200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:21.038 [2024-11-20 17:56:48.171209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:21.038 [2024-11-20 17:56:48.171218] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:21.038 [2024-11-20 17:56:48.171228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:21.038 [2024-11-20 17:56:48.171237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:21.038 [2024-11-20 17:56:48.171247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:21.038 [2024-11-20 17:56:48.171256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:21.038 [2024-11-20 17:56:48.171266] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:21.038 [2024-11-20 17:56:48.171275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:21.038 [2024-11-20 17:56:48.171284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:21.038 [2024-11-20 17:56:48.171293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:21.038 [2024-11-20 17:56:48.171302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:21.038 [2024-11-20 17:56:48.171311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:21.038 [2024-11-20 17:56:48.171320] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:21.038 [2024-11-20 17:56:48.171329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:21.038 [2024-11-20 17:56:48.171338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:21.038 [2024-11-20 17:56:48.171347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:21.038 [2024-11-20 17:56:48.171356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.038 [2024-11-20 17:56:48.171364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:21.038 [2024-11-20 17:56:48.171373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:21.038 [2024-11-20 17:56:48.171383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.038 [2024-11-20 17:56:48.171391] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:21.038 [2024-11-20 17:56:48.171403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:21.038 [2024-11-20 17:56:48.171413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:21.038 [2024-11-20 17:56:48.171426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:21.038 [2024-11-20 17:56:48.171436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:21.038 [2024-11-20 17:56:48.171446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:21.038 [2024-11-20 17:56:48.171455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:21.038 
[2024-11-20 17:56:48.171465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:21.038 [2024-11-20 17:56:48.171474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:21.038 [2024-11-20 17:56:48.171483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:21.038 [2024-11-20 17:56:48.171494] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:21.038 [2024-11-20 17:56:48.171506] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:21.038 [2024-11-20 17:56:48.171518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:21.038 [2024-11-20 17:56:48.171528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:21.038 [2024-11-20 17:56:48.171538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:21.038 [2024-11-20 17:56:48.171549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:21.038 [2024-11-20 17:56:48.171559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:21.038 [2024-11-20 17:56:48.171569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:21.038 [2024-11-20 17:56:48.171580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:21.038 [2024-11-20 17:56:48.171590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:21.038 [2024-11-20 17:56:48.171601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:21.038 [2024-11-20 17:56:48.171612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:21.038 [2024-11-20 17:56:48.171622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:21.038 [2024-11-20 17:56:48.171632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:21.038 [2024-11-20 17:56:48.171642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:21.038 [2024-11-20 17:56:48.171654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:21.038 [2024-11-20 17:56:48.171664] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:21.038 [2024-11-20 17:56:48.171675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:21.038 [2024-11-20 17:56:48.171687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:21.038 [2024-11-20 17:56:48.171698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:21.038 [2024-11-20 17:56:48.171708] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:21.038 [2024-11-20 17:56:48.171720] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:21.038 [2024-11-20 17:56:48.171731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.038 [2024-11-20 17:56:48.171741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:21.038 [2024-11-20 17:56:48.171755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:22:21.038 [2024-11-20 17:56:48.171777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-11-20 17:56:48.211352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-11-20 17:56:48.211562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:21.298 [2024-11-20 17:56:48.211585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.586 ms 00:22:21.298 [2024-11-20 17:56:48.211596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-11-20 17:56:48.211724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-11-20 17:56:48.211742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:21.298 [2024-11-20 17:56:48.211754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:21.298 [2024-11-20 17:56:48.211764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-11-20 17:56:48.265022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-11-20 17:56:48.265181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:21.298 [2024-11-20 17:56:48.265204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.301 ms 00:22:21.298 [2024-11-20 17:56:48.265220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-11-20 17:56:48.265318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-11-20 17:56:48.265331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:21.298 [2024-11-20 17:56:48.265344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:21.298 [2024-11-20 17:56:48.265354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-11-20 17:56:48.265814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-11-20 17:56:48.265828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:21.298 [2024-11-20 17:56:48.265840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.439 ms 00:22:21.298 [2024-11-20 17:56:48.265856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-11-20 17:56:48.265973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-11-20 17:56:48.265987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:21.298 [2024-11-20 17:56:48.265998] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:22:21.298 [2024-11-20 17:56:48.266008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-11-20 17:56:48.284906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-11-20 17:56:48.284942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:21.298 [2024-11-20 17:56:48.284956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.906 ms 00:22:21.298 [2024-11-20 17:56:48.284967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-11-20 17:56:48.304472] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:22:21.298 [2024-11-20 17:56:48.304512] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:21.298 [2024-11-20 17:56:48.304527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-11-20 17:56:48.304538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:21.298 [2024-11-20 17:56:48.304550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.489 ms 00:22:21.298 [2024-11-20 17:56:48.304559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-11-20 17:56:48.334041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-11-20 17:56:48.334091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:21.298 [2024-11-20 17:56:48.334105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.447 ms 00:22:21.298 [2024-11-20 17:56:48.334116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-11-20 17:56:48.352195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-11-20 17:56:48.352232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:21.298 [2024-11-20 17:56:48.352244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.026 ms 00:22:21.298 [2024-11-20 17:56:48.352253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-11-20 17:56:48.369885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-11-20 17:56:48.370042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:21.298 [2024-11-20 17:56:48.370063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.569 ms 00:22:21.298 [2024-11-20 17:56:48.370073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-11-20 17:56:48.370847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-11-20 17:56:48.370871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:21.298 [2024-11-20 17:56:48.370883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.660 ms 00:22:21.298 [2024-11-20 17:56:48.370893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-11-20 17:56:48.456348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.298 [2024-11-20 17:56:48.456416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:21.298 [2024-11-20 17:56:48.456433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.565 ms 00:22:21.298 [2024-11-20 17:56:48.456444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.298 [2024-11-20 17:56:48.467114] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:21.558 [2024-11-20 17:56:48.483359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.558 [2024-11-20 17:56:48.483404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:21.558 [2024-11-20 17:56:48.483422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.855 ms 00:22:21.558 [2024-11-20 17:56:48.483437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.558 [2024-11-20 17:56:48.483568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.558 [2024-11-20 17:56:48.483581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:21.558 [2024-11-20 17:56:48.483593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:21.558 [2024-11-20 17:56:48.483603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.558 [2024-11-20 17:56:48.483656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.558 [2024-11-20 17:56:48.483668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:21.558 [2024-11-20 17:56:48.483678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:21.558 [2024-11-20 17:56:48.483689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.558 [2024-11-20 17:56:48.483727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.558 [2024-11-20 17:56:48.483740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:21.558 [2024-11-20 17:56:48.483752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:21.558 [2024-11-20 17:56:48.483762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.558 [2024-11-20 17:56:48.483820] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:21.558 [2024-11-20 17:56:48.483832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.558 [2024-11-20 17:56:48.483843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:21.558 [2024-11-20 17:56:48.483854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:21.558 [2024-11-20 17:56:48.483864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.558 [2024-11-20 17:56:48.520253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.558 [2024-11-20 17:56:48.520310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:21.558 [2024-11-20 17:56:48.520325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.424 ms 00:22:21.558 [2024-11-20 17:56:48.520336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.558 [2024-11-20 17:56:48.520455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.558 [2024-11-20 17:56:48.520470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:21.558 [2024-11-20 17:56:48.520481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:21.558 [2024-11-20 17:56:48.520492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:22:21.558 [2024-11-20 17:56:48.521381] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:21.558 [2024-11-20 17:56:48.525526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 388.952 ms, result 0 00:22:21.558 [2024-11-20 17:56:48.526414] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:21.558 [2024-11-20 17:56:48.545014] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:21.558  [2024-11-20T17:56:48.734Z] Copying: 4096/4096 [kB] (average 23 MBps)[2024-11-20 17:56:48.722713] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:21.818 [2024-11-20 17:56:48.736556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.818 [2024-11-20 17:56:48.736709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:21.818 [2024-11-20 17:56:48.736796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:21.818 [2024-11-20 17:56:48.736840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.818 [2024-11-20 17:56:48.736891] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:21.818 [2024-11-20 17:56:48.741231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.818 [2024-11-20 17:56:48.741356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:21.818 [2024-11-20 17:56:48.741454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.291 ms 00:22:21.818 [2024-11-20 17:56:48.741489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.819 [2024-11-20 17:56:48.743306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.819 [2024-11-20 17:56:48.743431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:21.819 [2024-11-20 17:56:48.743504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.770 ms 00:22:21.819 [2024-11-20 17:56:48.743539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.819 [2024-11-20 17:56:48.746762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.819 [2024-11-20 17:56:48.746900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:21.819 [2024-11-20 17:56:48.746975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.184 ms 00:22:21.819 [2024-11-20 17:56:48.747009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.819 [2024-11-20 17:56:48.752671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.819 [2024-11-20 17:56:48.752817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:21.819 [2024-11-20 17:56:48.752890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.613 ms 00:22:21.819 [2024-11-20 17:56:48.752924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:21.819 [2024-11-20 17:56:48.789700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:21.819 [2024-11-20 17:56:48.789854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:21.819 [2024-11-20 17:56:48.789927] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 36.762 ms
00:22:21.819 [2024-11-20 17:56:48.789962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.819 [2024-11-20 17:56:48.811654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.819 [2024-11-20 17:56:48.811817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:22:21.819 [2024-11-20 17:56:48.811903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.636 ms
00:22:21.819 [2024-11-20 17:56:48.811941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.819 [2024-11-20 17:56:48.812126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.819 [2024-11-20 17:56:48.812171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:22:21.819 [2024-11-20 17:56:48.812203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms
00:22:21.819 [2024-11-20 17:56:48.812274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.819 [2024-11-20 17:56:48.849689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.819 [2024-11-20 17:56:48.849870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:22:21.819 [2024-11-20 17:56:48.850030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.415 ms
00:22:21.819 [2024-11-20 17:56:48.850067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.819 [2024-11-20 17:56:48.886731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.819 [2024-11-20 17:56:48.886786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:22:21.819 [2024-11-20 17:56:48.886801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.647 ms
00:22:21.819 [2024-11-20 17:56:48.886811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.819 [2024-11-20 17:56:48.923042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.819 [2024-11-20 17:56:48.923080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:22:21.819 [2024-11-20 17:56:48.923094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.229 ms
00:22:21.819 [2024-11-20 17:56:48.923104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.819 [2024-11-20 17:56:48.958911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.819 [2024-11-20 17:56:48.958948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:22:21.819 [2024-11-20 17:56:48.958961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.784 ms
00:22:21.819 [2024-11-20 17:56:48.958988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.819 [2024-11-20 17:56:48.959042] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[Band 1 through Band 100: 0 / 261120 wr_cnt: 0 state: free; 100 identical ftl_dev_dump_bands entries collapsed]
00:22:21.820 [2024-11-20 17:56:48.960163] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:22:21.820 [2024-11-20 17:56:48.960173] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5259f8b1-3ab3-43ba-9a28-e53cd5fd0400
00:22:21.820 [2024-11-20 17:56:48.960184] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:22:21.820 [2024-11-20 17:56:48.960194] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:22:21.820 [2024-11-20 17:56:48.960204] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:22:21.820 [2024-11-20 17:56:48.960214] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:22:21.820 [2024-11-20 17:56:48.960224] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:22:21.820 [2024-11-20 17:56:48.960234] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:22:21.820 [2024-11-20 17:56:48.960245] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:22:21.820 [2024-11-20 17:56:48.960254] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:22:21.820 [2024-11-20 17:56:48.960263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:22:21.820 [2024-11-20 17:56:48.960272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.820 [2024-11-20 17:56:48.960287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:22:21.820 [2024-11-20 17:56:48.960298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.233 ms
00:22:21.820 [2024-11-20 17:56:48.960308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.820 [2024-11-20 17:56:48.980458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.820 [2024-11-20 17:56:48.980589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:22:21.820 [2024-11-20 17:56:48.980700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.160 ms
00:22:21.820 [2024-11-20 17:56:48.980737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:21.820 [2024-11-20 17:56:48.981341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:22:21.820 [2024-11-20 17:56:48.981446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:22:21.820 [2024-11-20 17:56:48.981519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms
00:22:21.820 [2024-11-20 17:56:48.981555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:22.080 [2024-11-20 17:56:49.035844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:22.080 [2024-11-20 17:56:49.035983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:22:22.080 [2024-11-20 17:56:49.036063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:22.080 [2024-11-20 17:56:49.036100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:22.080 [2024-11-20 17:56:49.036224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:22.080 [2024-11-20 17:56:49.036261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:22:22.080 [2024-11-20 17:56:49.036292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:22.080 [2024-11-20 17:56:49.036322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:22.080 [2024-11-20 17:56:49.036464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:22.080 [2024-11-20 17:56:49.036505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:22:22.080 [2024-11-20 17:56:49.036537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:22.080 [2024-11-20 17:56:49.036567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
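
The statistics block above is the FTL health summary for this instance: WAF, the write amplification factor, is total media writes divided by user (host) writes, so with total writes: 960 and user writes: 0 the device reports inf; every write in this run was metadata, not user data. A minimal sketch of recomputing WAF from a per-line capture of this output, assuming a file named ftl.log (the file name and the helper itself are illustrative, not part of the SPDK test suite):

  awk '
    /total writes:/ { total = $NF }    # media writes; 960 in the dump above
    /user writes:/  { user  = $NF }    # host-issued writes; 0 above
    END {
      if (user == 0) print "WAF: inf"                # matches the log output
      else           printf "WAF: %.2f\n", total / user
    }
  ' ftl.log
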
00:22:22.080 [2024-11-20 17:56:49.036608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:22.080 [2024-11-20 17:56:49.036724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:22:22.080 [2024-11-20 17:56:49.036756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:22.080 [2024-11-20 17:56:49.036809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:22.080 [2024-11-20 17:56:49.158888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:22.080 [2024-11-20 17:56:49.159090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:22:22.080 [2024-11-20 17:56:49.159215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:22.080 [2024-11-20 17:56:49.159252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:22.338 [2024-11-20 17:56:49.259909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:22.338 [2024-11-20 17:56:49.260099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:22:22.338 [2024-11-20 17:56:49.260214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:22.338 [2024-11-20 17:56:49.260251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:22.338 [2024-11-20 17:56:49.260369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:22.338 [2024-11-20 17:56:49.260458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:22:22.338 [2024-11-20 17:56:49.260476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:22.338 [2024-11-20 17:56:49.260487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:22.338 [2024-11-20 17:56:49.260527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:22.338 [2024-11-20 17:56:49.260538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:22:22.338 [2024-11-20 17:56:49.260555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:22.338 [2024-11-20 17:56:49.260565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:22.338 [2024-11-20 17:56:49.260678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:22.339 [2024-11-20 17:56:49.260692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:22:22.339 [2024-11-20 17:56:49.260702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:22.339 [2024-11-20 17:56:49.260712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:22.339 [2024-11-20 17:56:49.260748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:22.339 [2024-11-20 17:56:49.260761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:22:22.339 [2024-11-20 17:56:49.260803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:22.339 [2024-11-20 17:56:49.260813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:22.339 [2024-11-20 17:56:49.260854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:22.339 [2024-11-20 17:56:49.260865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:22:22.339 [2024-11-20 17:56:49.260876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
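
Every management step above is traced by the same trace_step helper: an Action or Rollback marker, the step name, a duration, and a status, with finish_msg closing the pipeline and printing the total (the 'FTL shutdown' summary just below). One way to surface the slowest steps from a per-line capture, again assuming an ftl.log file (a hypothetical helper, not an SPDK tool):

  awk '
    /trace_step/ && /name:/     { sub(/.*name: /, ""); step = $0 }
    /trace_step/ && /duration:/ { print $(NF-1), "ms -", step }
  ' ftl.log | sort -rn | head
  # e.g. "37.415 ms - Persist band info metadata" tops the shutdown above
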
00:22:22.339 [2024-11-20 17:56:49.260886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:22.339 [2024-11-20 17:56:49.260928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:22:22.339 [2024-11-20 17:56:49.260940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:22:22.339 [2024-11-20 17:56:49.260954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:22:22.339 [2024-11-20 17:56:49.260964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:22:22.339 [2024-11-20 17:56:49.261099] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 525.383 ms, result 0
00:22:23.276
00:22:23.276
00:22:23.276 17:56:50 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78763
00:22:23.276 17:56:50 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:22:23.276 17:56:50 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78763
00:22:23.276 17:56:50 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78763 ']'
00:22:23.276 17:56:50 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:22:23.276 17:56:50 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:23.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:22:23.276 17:56:50 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:22:23.276 17:56:50 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:23.276 17:56:50 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:22:23.276 [2024-11-20 17:56:50.439901] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization...
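
The xtrace lines above (ftl/trim.sh@92 through @94) show the standard harness launch: spdk_tgt is started in the background with the ftl_init log flag, its pid is recorded in svcpid, and waitforlisten from common/autotest_common.sh polls until the target accepts RPCs on /var/tmp/spdk.sock; the script then replays its saved configuration through rpc.py load_config (ftl/trim.sh@96 below). Reconstructed as a sketch under those assumptions (the config redirection is illustrative; the trace does not show where load_config reads from):

  spdk=/home/vagrant/spdk_repo/spdk
  "$spdk/build/bin/spdk_tgt" -L ftl_init &   # -L enables the ftl_init debug log
  svcpid=$!
  waitforlisten "$svcpid"        # blocks until /var/tmp/spdk.sock answers
  "$spdk/scripts/rpc.py" load_config < ftl.json   # config source assumed
  # ... test body ...
  killprocess "$svcpid"          # harness helper: kill the pid, then wait on it
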
00:22:23.276 [2024-11-20 17:56:50.440533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78763 ] 00:22:23.535 [2024-11-20 17:56:50.618449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.793 [2024-11-20 17:56:50.726707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:24.731 17:56:51 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:24.731 17:56:51 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:22:24.731 17:56:51 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:22:24.731 [2024-11-20 17:56:51.772868] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:24.731 [2024-11-20 17:56:51.773110] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:24.992 [2024-11-20 17:56:51.950369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.992 [2024-11-20 17:56:51.950427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:24.992 [2024-11-20 17:56:51.950447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:24.992 [2024-11-20 17:56:51.950458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.992 [2024-11-20 17:56:51.953530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.992 [2024-11-20 17:56:51.953569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:24.992 [2024-11-20 17:56:51.953584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.054 ms 00:22:24.992 [2024-11-20 17:56:51.953595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.992 [2024-11-20 17:56:51.953707] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:24.992 [2024-11-20 17:56:51.954784] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:24.992 [2024-11-20 17:56:51.954817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.992 [2024-11-20 17:56:51.954828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:24.992 [2024-11-20 17:56:51.954842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.124 ms 00:22:24.992 [2024-11-20 17:56:51.954853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.992 [2024-11-20 17:56:51.956333] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:24.992 [2024-11-20 17:56:51.975970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.992 [2024-11-20 17:56:51.976011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:24.992 [2024-11-20 17:56:51.976024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.674 ms 00:22:24.992 [2024-11-20 17:56:51.976036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.992 [2024-11-20 17:56:51.976127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.992 [2024-11-20 17:56:51.976143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:24.992 [2024-11-20 17:56:51.976154] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:22:24.992 [2024-11-20 17:56:51.976166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.992 [2024-11-20 17:56:51.982910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.992 [2024-11-20 17:56:51.983104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:24.992 [2024-11-20 17:56:51.983124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.708 ms 00:22:24.992 [2024-11-20 17:56:51.983138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.992 [2024-11-20 17:56:51.983253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.992 [2024-11-20 17:56:51.983272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:24.992 [2024-11-20 17:56:51.983283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:24.992 [2024-11-20 17:56:51.983297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.992 [2024-11-20 17:56:51.983327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.992 [2024-11-20 17:56:51.983341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:24.992 [2024-11-20 17:56:51.983351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:24.992 [2024-11-20 17:56:51.983364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.992 [2024-11-20 17:56:51.983389] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:24.992 [2024-11-20 17:56:51.988210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.992 [2024-11-20 17:56:51.988240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:24.992 [2024-11-20 17:56:51.988255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.831 ms 00:22:24.992 [2024-11-20 17:56:51.988265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.992 [2024-11-20 17:56:51.988335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.992 [2024-11-20 17:56:51.988348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:24.992 [2024-11-20 17:56:51.988361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:24.992 [2024-11-20 17:56:51.988376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.992 [2024-11-20 17:56:51.988400] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:24.992 [2024-11-20 17:56:51.988422] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:24.992 [2024-11-20 17:56:51.988467] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:24.992 [2024-11-20 17:56:51.988486] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:24.992 [2024-11-20 17:56:51.988575] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:24.992 [2024-11-20 17:56:51.988589] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:24.992 [2024-11-20 17:56:51.988609] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:24.992 [2024-11-20 17:56:51.988623] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:24.992 [2024-11-20 17:56:51.988637] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:24.992 [2024-11-20 17:56:51.988648] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:24.992 [2024-11-20 17:56:51.988660] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:24.992 [2024-11-20 17:56:51.988671] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:24.992 [2024-11-20 17:56:51.988687] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:24.992 [2024-11-20 17:56:51.988698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.992 [2024-11-20 17:56:51.988710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:24.992 [2024-11-20 17:56:51.988721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:22:24.992 [2024-11-20 17:56:51.988733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.992 [2024-11-20 17:56:51.988831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.992 [2024-11-20 17:56:51.988846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:24.992 [2024-11-20 17:56:51.988858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:22:24.993 [2024-11-20 17:56:51.988870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.993 [2024-11-20 17:56:51.988973] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:24.993 [2024-11-20 17:56:51.988989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:24.993 [2024-11-20 17:56:51.989000] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:24.993 [2024-11-20 17:56:51.989013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.993 [2024-11-20 17:56:51.989035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:24.993 [2024-11-20 17:56:51.989048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:24.993 [2024-11-20 17:56:51.989057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:24.993 [2024-11-20 17:56:51.989075] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:24.993 [2024-11-20 17:56:51.989085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:24.993 [2024-11-20 17:56:51.989097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:24.993 [2024-11-20 17:56:51.989107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:24.993 [2024-11-20 17:56:51.989119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:24.993 [2024-11-20 17:56:51.989127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:24.993 [2024-11-20 17:56:51.989139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:24.993 [2024-11-20 17:56:51.989149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:24.993 [2024-11-20 17:56:51.989161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.993 
[2024-11-20 17:56:51.989170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:24.993 [2024-11-20 17:56:51.989181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:24.993 [2024-11-20 17:56:51.989190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.993 [2024-11-20 17:56:51.989201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:24.993 [2024-11-20 17:56:51.989219] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:24.993 [2024-11-20 17:56:51.989231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.993 [2024-11-20 17:56:51.989240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:24.993 [2024-11-20 17:56:51.989254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:24.993 [2024-11-20 17:56:51.989263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.993 [2024-11-20 17:56:51.989274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:24.993 [2024-11-20 17:56:51.989283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:24.993 [2024-11-20 17:56:51.989316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.993 [2024-11-20 17:56:51.989326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:24.993 [2024-11-20 17:56:51.989338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:24.993 [2024-11-20 17:56:51.989347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:24.993 [2024-11-20 17:56:51.989359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:24.993 [2024-11-20 17:56:51.989368] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:24.993 [2024-11-20 17:56:51.989381] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:24.993 [2024-11-20 17:56:51.989390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:24.993 [2024-11-20 17:56:51.989402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:24.993 [2024-11-20 17:56:51.989411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:24.993 [2024-11-20 17:56:51.989422] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:24.993 [2024-11-20 17:56:51.989432] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:24.993 [2024-11-20 17:56:51.989445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.993 [2024-11-20 17:56:51.989455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:24.993 [2024-11-20 17:56:51.989467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:24.993 [2024-11-20 17:56:51.989477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.993 [2024-11-20 17:56:51.989488] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:24.993 [2024-11-20 17:56:51.989501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:24.993 [2024-11-20 17:56:51.989513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:24.993 [2024-11-20 17:56:51.989523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:24.993 [2024-11-20 17:56:51.989536] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:22:24.993 [2024-11-20 17:56:51.989546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:24.993 [2024-11-20 17:56:51.989557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:24.993 [2024-11-20 17:56:51.989567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:24.993 [2024-11-20 17:56:51.989578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:24.993 [2024-11-20 17:56:51.989587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:24.993 [2024-11-20 17:56:51.989600] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:24.993 [2024-11-20 17:56:51.989621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:24.993 [2024-11-20 17:56:51.989638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:24.993 [2024-11-20 17:56:51.989648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:24.993 [2024-11-20 17:56:51.989663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:24.993 [2024-11-20 17:56:51.989674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:24.993 [2024-11-20 17:56:51.989687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:24.993 [2024-11-20 17:56:51.989698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:24.993 [2024-11-20 17:56:51.989710] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:24.993 [2024-11-20 17:56:51.989720] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:24.993 [2024-11-20 17:56:51.989733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:24.993 [2024-11-20 17:56:51.989743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:24.993 [2024-11-20 17:56:51.989756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:24.993 [2024-11-20 17:56:51.989776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:24.993 [2024-11-20 17:56:51.989790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:24.993 [2024-11-20 17:56:51.989801] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:24.993 [2024-11-20 17:56:51.989814] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:24.993 [2024-11-20 
17:56:51.989825] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:24.993 [2024-11-20 17:56:51.989841] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:24.993 [2024-11-20 17:56:51.989851] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:24.993 [2024-11-20 17:56:51.989864] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:24.993 [2024-11-20 17:56:51.989878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:24.993 [2024-11-20 17:56:51.989892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.993 [2024-11-20 17:56:51.989902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:24.993 [2024-11-20 17:56:51.989915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.986 ms 00:22:24.993 [2024-11-20 17:56:51.989926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.993 [2024-11-20 17:56:52.028402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.993 [2024-11-20 17:56:52.028439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:24.993 [2024-11-20 17:56:52.028456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.472 ms 00:22:24.993 [2024-11-20 17:56:52.028470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.993 [2024-11-20 17:56:52.028589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.993 [2024-11-20 17:56:52.028602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:24.993 [2024-11-20 17:56:52.028616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:24.993 [2024-11-20 17:56:52.028627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.993 [2024-11-20 17:56:52.076924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.993 [2024-11-20 17:56:52.077097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:24.993 [2024-11-20 17:56:52.077125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.347 ms 00:22:24.993 [2024-11-20 17:56:52.077136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.993 [2024-11-20 17:56:52.077233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.993 [2024-11-20 17:56:52.077246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:24.993 [2024-11-20 17:56:52.077260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:24.993 [2024-11-20 17:56:52.077270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.993 [2024-11-20 17:56:52.077717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.993 [2024-11-20 17:56:52.077730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:24.993 [2024-11-20 17:56:52.077747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.424 ms 00:22:24.994 [2024-11-20 17:56:52.077757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:24.994 [2024-11-20 17:56:52.077898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.994 [2024-11-20 17:56:52.077912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:24.994 [2024-11-20 17:56:52.077926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:22:24.994 [2024-11-20 17:56:52.077935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.994 [2024-11-20 17:56:52.098720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.994 [2024-11-20 17:56:52.098756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:24.994 [2024-11-20 17:56:52.098788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.791 ms 00:22:24.994 [2024-11-20 17:56:52.098799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.994 [2024-11-20 17:56:52.129751] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:24.994 [2024-11-20 17:56:52.129919] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:24.994 [2024-11-20 17:56:52.129946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.994 [2024-11-20 17:56:52.129958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:24.994 [2024-11-20 17:56:52.129973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.078 ms 00:22:24.994 [2024-11-20 17:56:52.129984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:24.994 [2024-11-20 17:56:52.158880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:24.994 [2024-11-20 17:56:52.158929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:24.994 [2024-11-20 17:56:52.158947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.800 ms 00:22:24.994 [2024-11-20 17:56:52.158957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.253 [2024-11-20 17:56:52.177896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.253 [2024-11-20 17:56:52.177932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:25.253 [2024-11-20 17:56:52.177951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.881 ms 00:22:25.253 [2024-11-20 17:56:52.177961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.253 [2024-11-20 17:56:52.195349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.253 [2024-11-20 17:56:52.195509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:25.253 [2024-11-20 17:56:52.195534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.320 ms 00:22:25.253 [2024-11-20 17:56:52.195543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.253 [2024-11-20 17:56:52.196394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.253 [2024-11-20 17:56:52.196421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:25.253 [2024-11-20 17:56:52.196436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.705 ms 00:22:25.253 [2024-11-20 17:56:52.196447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.253 [2024-11-20 
17:56:52.279987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.253 [2024-11-20 17:56:52.280050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:25.253 [2024-11-20 17:56:52.280070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.641 ms 00:22:25.253 [2024-11-20 17:56:52.280082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.253 [2024-11-20 17:56:52.291270] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:25.253 [2024-11-20 17:56:52.307483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.253 [2024-11-20 17:56:52.307539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:25.253 [2024-11-20 17:56:52.307559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.326 ms 00:22:25.253 [2024-11-20 17:56:52.307572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.253 [2024-11-20 17:56:52.307674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.253 [2024-11-20 17:56:52.307691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:25.253 [2024-11-20 17:56:52.307703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:25.253 [2024-11-20 17:56:52.307716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.253 [2024-11-20 17:56:52.307790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.253 [2024-11-20 17:56:52.307823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:25.253 [2024-11-20 17:56:52.307834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:22:25.253 [2024-11-20 17:56:52.307850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.253 [2024-11-20 17:56:52.307877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.253 [2024-11-20 17:56:52.307908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:25.253 [2024-11-20 17:56:52.307919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:25.253 [2024-11-20 17:56:52.307932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.253 [2024-11-20 17:56:52.307973] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:25.253 [2024-11-20 17:56:52.307992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.253 [2024-11-20 17:56:52.308003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:25.253 [2024-11-20 17:56:52.308020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:22:25.253 [2024-11-20 17:56:52.308030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.253 [2024-11-20 17:56:52.344819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.253 [2024-11-20 17:56:52.344857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:25.253 [2024-11-20 17:56:52.344875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.815 ms 00:22:25.253 [2024-11-20 17:56:52.344886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.253 [2024-11-20 17:56:52.345001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.253 [2024-11-20 17:56:52.345014] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:25.253 [2024-11-20 17:56:52.345027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:22:25.253 [2024-11-20 17:56:52.345040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.253 [2024-11-20 17:56:52.345961] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:25.253 [2024-11-20 17:56:52.350029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 395.907 ms, result 0 00:22:25.254 [2024-11-20 17:56:52.351269] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:25.254 Some configs were skipped because the RPC state that can call them passed over. 00:22:25.254 17:56:52 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:22:25.512 [2024-11-20 17:56:52.603633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.512 [2024-11-20 17:56:52.603845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:25.512 [2024-11-20 17:56:52.603942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:22:25.512 [2024-11-20 17:56:52.603990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.512 [2024-11-20 17:56:52.604123] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.178 ms, result 0 00:22:25.512 true 00:22:25.512 17:56:52 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:22:25.772 [2024-11-20 17:56:52.819217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:25.772 [2024-11-20 17:56:52.819267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:22:25.772 [2024-11-20 17:56:52.819286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.324 ms 00:22:25.772 [2024-11-20 17:56:52.819298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:25.772 [2024-11-20 17:56:52.819340] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.454 ms, result 0 00:22:25.772 true 00:22:25.772 17:56:52 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78763 00:22:25.772 17:56:52 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78763 ']' 00:22:25.772 17:56:52 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78763 00:22:25.772 17:56:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:22:25.772 17:56:52 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:25.772 17:56:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78763 00:22:25.772 killing process with pid 78763 00:22:25.772 17:56:52 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:25.772 17:56:52 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:25.772 17:56:52 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78763' 00:22:25.772 17:56:52 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78763 00:22:25.772 17:56:52 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78763 00:22:27.151 [2024-11-20 17:56:53.998235] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.151 [2024-11-20 17:56:53.998300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:27.151 [2024-11-20 17:56:53.998316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:27.151 [2024-11-20 17:56:53.998329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.151 [2024-11-20 17:56:53.998371] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:27.151 [2024-11-20 17:56:54.002688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.151 [2024-11-20 17:56:54.002731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:27.151 [2024-11-20 17:56:54.002749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.300 ms 00:22:27.151 [2024-11-20 17:56:54.002771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.151 [2024-11-20 17:56:54.003031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.151 [2024-11-20 17:56:54.003046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:27.151 [2024-11-20 17:56:54.003059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:22:27.151 [2024-11-20 17:56:54.003069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.151 [2024-11-20 17:56:54.006328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.151 [2024-11-20 17:56:54.006364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:27.151 [2024-11-20 17:56:54.006382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.241 ms 00:22:27.151 [2024-11-20 17:56:54.006393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.151 [2024-11-20 17:56:54.011974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.151 [2024-11-20 17:56:54.012009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:27.151 [2024-11-20 17:56:54.012025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.549 ms 00:22:27.151 [2024-11-20 17:56:54.012035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.151 [2024-11-20 17:56:54.027238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.152 [2024-11-20 17:56:54.027275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:27.152 [2024-11-20 17:56:54.027294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.161 ms 00:22:27.152 [2024-11-20 17:56:54.027314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.152 [2024-11-20 17:56:54.038251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.152 [2024-11-20 17:56:54.038292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:27.152 [2024-11-20 17:56:54.038309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.897 ms 00:22:27.152 [2024-11-20 17:56:54.038320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.152 [2024-11-20 17:56:54.038446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.152 [2024-11-20 17:56:54.038461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:27.152 [2024-11-20 17:56:54.038475] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:22:27.152 [2024-11-20 17:56:54.038485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.152 [2024-11-20 17:56:54.054147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.152 [2024-11-20 17:56:54.054183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:27.152 [2024-11-20 17:56:54.054198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.663 ms 00:22:27.152 [2024-11-20 17:56:54.054208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.152 [2024-11-20 17:56:54.069433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.152 [2024-11-20 17:56:54.069594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:27.152 [2024-11-20 17:56:54.069632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.196 ms 00:22:27.152 [2024-11-20 17:56:54.069643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.152 [2024-11-20 17:56:54.084462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.152 [2024-11-20 17:56:54.084620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:27.152 [2024-11-20 17:56:54.084649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.748 ms 00:22:27.152 [2024-11-20 17:56:54.084659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.152 [2024-11-20 17:56:54.099027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.152 [2024-11-20 17:56:54.099179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:27.152 [2024-11-20 17:56:54.099205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.309 ms 00:22:27.152 [2024-11-20 17:56:54.099216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.152 [2024-11-20 17:56:54.099287] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:27.152 [2024-11-20 17:56:54.099306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 
17:56:54.099435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:22:27.152 [2024-11-20 17:56:54.099749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.099999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.100019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.100030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.100043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.100054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.100067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.100078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.100090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.100103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.100117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.100129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.100142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:27.152 [2024-11-20 17:56:54.100152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:27.153 [2024-11-20 17:56:54.100564] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:27.153 [2024-11-20 17:56:54.100581] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5259f8b1-3ab3-43ba-9a28-e53cd5fd0400 00:22:27.153 [2024-11-20 17:56:54.100602] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:27.153 [2024-11-20 17:56:54.100619] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:27.153 [2024-11-20 17:56:54.100629] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:27.153 [2024-11-20 17:56:54.100642] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:27.153 [2024-11-20 17:56:54.100652] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:27.153 [2024-11-20 17:56:54.100664] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:27.153 [2024-11-20 17:56:54.100674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:27.153 [2024-11-20 17:56:54.100685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:27.153 [2024-11-20 17:56:54.100695] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:27.153 [2024-11-20 17:56:54.100708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:27.153 [2024-11-20 17:56:54.100718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:27.153 [2024-11-20 17:56:54.100731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.425 ms 00:22:27.153 [2024-11-20 17:56:54.100741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.153 [2024-11-20 17:56:54.120505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.153 [2024-11-20 17:56:54.120539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:27.153 [2024-11-20 17:56:54.120558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.766 ms 00:22:27.153 [2024-11-20 17:56:54.120568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.153 [2024-11-20 17:56:54.121166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:27.153 [2024-11-20 17:56:54.121185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:27.153 [2024-11-20 17:56:54.121200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.543 ms 00:22:27.153 [2024-11-20 17:56:54.121215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.153 [2024-11-20 17:56:54.189340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.153 [2024-11-20 17:56:54.189376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:27.153 [2024-11-20 17:56:54.189392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.153 [2024-11-20 17:56:54.189403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.153 [2024-11-20 17:56:54.189489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.153 [2024-11-20 17:56:54.189503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:27.153 [2024-11-20 17:56:54.189516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.153 [2024-11-20 17:56:54.189530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.153 [2024-11-20 17:56:54.189582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.153 [2024-11-20 17:56:54.189595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:27.153 [2024-11-20 17:56:54.189621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.153 [2024-11-20 17:56:54.189631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.153 [2024-11-20 17:56:54.189653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.153 [2024-11-20 17:56:54.189664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:27.153 [2024-11-20 17:56:54.189676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.153 [2024-11-20 17:56:54.189687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.153 [2024-11-20 17:56:54.316542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.153 [2024-11-20 17:56:54.316603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:27.153 [2024-11-20 17:56:54.316621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.153 [2024-11-20 17:56:54.316648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.413 [2024-11-20 
17:56:54.416728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.413 [2024-11-20 17:56:54.416804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:27.413 [2024-11-20 17:56:54.416824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.413 [2024-11-20 17:56:54.416838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.413 [2024-11-20 17:56:54.416954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.413 [2024-11-20 17:56:54.416967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:27.413 [2024-11-20 17:56:54.416985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.413 [2024-11-20 17:56:54.416995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.413 [2024-11-20 17:56:54.417028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.413 [2024-11-20 17:56:54.417039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:27.413 [2024-11-20 17:56:54.417052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.413 [2024-11-20 17:56:54.417063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.413 [2024-11-20 17:56:54.417179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.413 [2024-11-20 17:56:54.417194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:27.413 [2024-11-20 17:56:54.417208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.413 [2024-11-20 17:56:54.417219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.413 [2024-11-20 17:56:54.417259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.413 [2024-11-20 17:56:54.417271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:27.413 [2024-11-20 17:56:54.417284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.413 [2024-11-20 17:56:54.417295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.413 [2024-11-20 17:56:54.417341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.413 [2024-11-20 17:56:54.417353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:27.413 [2024-11-20 17:56:54.417369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.413 [2024-11-20 17:56:54.417380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.413 [2024-11-20 17:56:54.417426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:27.413 [2024-11-20 17:56:54.417438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:27.413 [2024-11-20 17:56:54.417451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:27.413 [2024-11-20 17:56:54.417461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:27.413 [2024-11-20 17:56:54.417603] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 420.024 ms, result 0 00:22:28.351 17:56:55 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:28.610 [2024-11-20 17:56:55.553646] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:22:28.610 [2024-11-20 17:56:55.553990] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78829 ] 00:22:28.610 [2024-11-20 17:56:55.733537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.870 [2024-11-20 17:56:55.849740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:29.128 [2024-11-20 17:56:56.211529] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:29.128 [2024-11-20 17:56:56.211794] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:29.388 [2024-11-20 17:56:56.373348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.388 [2024-11-20 17:56:56.373395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:29.388 [2024-11-20 17:56:56.373411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:29.388 [2024-11-20 17:56:56.373420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.388 [2024-11-20 17:56:56.376906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.388 [2024-11-20 17:56:56.377066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:29.388 [2024-11-20 17:56:56.377229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.471 ms 00:22:29.388 [2024-11-20 17:56:56.377270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.388 [2024-11-20 17:56:56.377386] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:29.388 [2024-11-20 17:56:56.378454] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:29.388 [2024-11-20 17:56:56.378491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.388 [2024-11-20 17:56:56.378504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:29.388 [2024-11-20 17:56:56.378516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:22:29.388 [2024-11-20 17:56:56.378527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.388 [2024-11-20 17:56:56.380043] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:29.388 [2024-11-20 17:56:56.399524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.388 [2024-11-20 17:56:56.399715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:29.388 [2024-11-20 17:56:56.399737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.513 ms 00:22:29.388 [2024-11-20 17:56:56.399749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.388 [2024-11-20 17:56:56.399863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.388 [2024-11-20 17:56:56.399881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:29.388 [2024-11-20 17:56:56.399892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:22:29.388 [2024-11-20 
17:56:56.399903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.388 [2024-11-20 17:56:56.406693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.388 [2024-11-20 17:56:56.406857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:29.388 [2024-11-20 17:56:56.406893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.759 ms 00:22:29.388 [2024-11-20 17:56:56.406903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.388 [2024-11-20 17:56:56.407007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.389 [2024-11-20 17:56:56.407022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:29.389 [2024-11-20 17:56:56.407033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:22:29.389 [2024-11-20 17:56:56.407044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.389 [2024-11-20 17:56:56.407074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.389 [2024-11-20 17:56:56.407089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:29.389 [2024-11-20 17:56:56.407100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:22:29.389 [2024-11-20 17:56:56.407110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.389 [2024-11-20 17:56:56.407132] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:22:29.389 [2024-11-20 17:56:56.411824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.389 [2024-11-20 17:56:56.411854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:29.389 [2024-11-20 17:56:56.411867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.704 ms 00:22:29.389 [2024-11-20 17:56:56.411877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.389 [2024-11-20 17:56:56.411944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.389 [2024-11-20 17:56:56.411957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:29.389 [2024-11-20 17:56:56.411969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:29.389 [2024-11-20 17:56:56.411980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.389 [2024-11-20 17:56:56.412000] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:29.389 [2024-11-20 17:56:56.412026] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:29.389 [2024-11-20 17:56:56.412062] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:29.389 [2024-11-20 17:56:56.412079] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:29.389 [2024-11-20 17:56:56.412167] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:29.389 [2024-11-20 17:56:56.412181] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:29.389 [2024-11-20 17:56:56.412195] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:22:29.389 [2024-11-20 17:56:56.412208] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:29.389 [2024-11-20 17:56:56.412223] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:29.389 [2024-11-20 17:56:56.412234] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:22:29.389 [2024-11-20 17:56:56.412245] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:29.389 [2024-11-20 17:56:56.412255] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:29.389 [2024-11-20 17:56:56.412264] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:29.389 [2024-11-20 17:56:56.412275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.389 [2024-11-20 17:56:56.412286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:29.389 [2024-11-20 17:56:56.412296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:22:29.389 [2024-11-20 17:56:56.412306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.389 [2024-11-20 17:56:56.412381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.389 [2024-11-20 17:56:56.412396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:29.389 [2024-11-20 17:56:56.412406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:22:29.389 [2024-11-20 17:56:56.412416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.389 [2024-11-20 17:56:56.412507] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:29.389 [2024-11-20 17:56:56.412525] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:29.389 [2024-11-20 17:56:56.412537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.389 [2024-11-20 17:56:56.412547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.389 [2024-11-20 17:56:56.412557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:29.389 [2024-11-20 17:56:56.412567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:29.389 [2024-11-20 17:56:56.412577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:22:29.389 [2024-11-20 17:56:56.412586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:29.389 [2024-11-20 17:56:56.412596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:22:29.389 [2024-11-20 17:56:56.412608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.389 [2024-11-20 17:56:56.412617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:29.389 [2024-11-20 17:56:56.412627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:22:29.389 [2024-11-20 17:56:56.412636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:29.389 [2024-11-20 17:56:56.412656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:29.389 [2024-11-20 17:56:56.412666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:22:29.389 [2024-11-20 17:56:56.412675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.389 [2024-11-20 17:56:56.412685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:22:29.389 [2024-11-20 17:56:56.412694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:22:29.389 [2024-11-20 17:56:56.412703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.389 [2024-11-20 17:56:56.412712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:29.389 [2024-11-20 17:56:56.412721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:22:29.389 [2024-11-20 17:56:56.412731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.389 [2024-11-20 17:56:56.412740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:29.389 [2024-11-20 17:56:56.412750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:22:29.389 [2024-11-20 17:56:56.412759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.389 [2024-11-20 17:56:56.412782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:29.389 [2024-11-20 17:56:56.412792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:22:29.389 [2024-11-20 17:56:56.412801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.389 [2024-11-20 17:56:56.412811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:29.389 [2024-11-20 17:56:56.412820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:22:29.389 [2024-11-20 17:56:56.412830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:29.389 [2024-11-20 17:56:56.412839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:29.389 [2024-11-20 17:56:56.412848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:22:29.389 [2024-11-20 17:56:56.412858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.389 [2024-11-20 17:56:56.412867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:29.389 [2024-11-20 17:56:56.412876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:22:29.389 [2024-11-20 17:56:56.412886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:29.389 [2024-11-20 17:56:56.412895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:29.389 [2024-11-20 17:56:56.412904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:22:29.389 [2024-11-20 17:56:56.412913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.389 [2024-11-20 17:56:56.412921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:29.389 [2024-11-20 17:56:56.412931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:22:29.389 [2024-11-20 17:56:56.412940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.389 [2024-11-20 17:56:56.412949] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:29.389 [2024-11-20 17:56:56.412959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:29.389 [2024-11-20 17:56:56.412970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:29.389 [2024-11-20 17:56:56.412983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:29.389 [2024-11-20 17:56:56.412993] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:29.389 [2024-11-20 17:56:56.413002] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:29.389 [2024-11-20 17:56:56.413012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:29.389 [2024-11-20 17:56:56.413021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:29.389 [2024-11-20 17:56:56.413030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:29.389 [2024-11-20 17:56:56.413039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:29.389 [2024-11-20 17:56:56.413050] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:29.389 [2024-11-20 17:56:56.413062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.389 [2024-11-20 17:56:56.413074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:22:29.389 [2024-11-20 17:56:56.413084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:22:29.389 [2024-11-20 17:56:56.413095] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:22:29.389 [2024-11-20 17:56:56.413105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:22:29.389 [2024-11-20 17:56:56.413115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:22:29.389 [2024-11-20 17:56:56.413126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:22:29.389 [2024-11-20 17:56:56.413136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:22:29.389 [2024-11-20 17:56:56.413146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:22:29.389 [2024-11-20 17:56:56.413157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:22:29.390 [2024-11-20 17:56:56.413168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:22:29.390 [2024-11-20 17:56:56.413178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:22:29.390 [2024-11-20 17:56:56.413188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:22:29.390 [2024-11-20 17:56:56.413198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:22:29.390 [2024-11-20 17:56:56.413208] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:22:29.390 [2024-11-20 17:56:56.413219] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:29.390 [2024-11-20 17:56:56.413230] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:29.390 [2024-11-20 17:56:56.413241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:29.390 [2024-11-20 17:56:56.413251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:29.390 [2024-11-20 17:56:56.413266] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:29.390 [2024-11-20 17:56:56.413277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:29.390 [2024-11-20 17:56:56.413288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.390 [2024-11-20 17:56:56.413299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:29.390 [2024-11-20 17:56:56.413313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:22:29.390 [2024-11-20 17:56:56.413322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.390 [2024-11-20 17:56:56.451512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.390 [2024-11-20 17:56:56.451549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:29.390 [2024-11-20 17:56:56.451563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.198 ms 00:22:29.390 [2024-11-20 17:56:56.451574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.390 [2024-11-20 17:56:56.451690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.390 [2024-11-20 17:56:56.451710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:29.390 [2024-11-20 17:56:56.451722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:22:29.390 [2024-11-20 17:56:56.451732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.390 [2024-11-20 17:56:56.509199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.390 [2024-11-20 17:56:56.509233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:29.390 [2024-11-20 17:56:56.509247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.537 ms 00:22:29.390 [2024-11-20 17:56:56.509261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.390 [2024-11-20 17:56:56.509349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.390 [2024-11-20 17:56:56.509362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:29.390 [2024-11-20 17:56:56.509374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:29.390 [2024-11-20 17:56:56.509384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.390 [2024-11-20 17:56:56.509858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.390 [2024-11-20 17:56:56.509874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:29.390 [2024-11-20 17:56:56.509885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.453 ms 00:22:29.390 [2024-11-20 17:56:56.509902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.390 [2024-11-20 17:56:56.510018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:22:29.390 [2024-11-20 17:56:56.510032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:29.390 [2024-11-20 17:56:56.510043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:22:29.390 [2024-11-20 17:56:56.510053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.390 [2024-11-20 17:56:56.529719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.390 [2024-11-20 17:56:56.529756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:29.390 [2024-11-20 17:56:56.529784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.675 ms 00:22:29.390 [2024-11-20 17:56:56.529796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.390 [2024-11-20 17:56:56.549394] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:22:29.390 [2024-11-20 17:56:56.549431] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:29.390 [2024-11-20 17:56:56.549446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.390 [2024-11-20 17:56:56.549457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:29.390 [2024-11-20 17:56:56.549468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.579 ms 00:22:29.390 [2024-11-20 17:56:56.549478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.650 [2024-11-20 17:56:56.578573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.650 [2024-11-20 17:56:56.578624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:29.650 [2024-11-20 17:56:56.578638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.066 ms 00:22:29.650 [2024-11-20 17:56:56.578649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.650 [2024-11-20 17:56:56.597249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.650 [2024-11-20 17:56:56.597287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:29.650 [2024-11-20 17:56:56.597300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.548 ms 00:22:29.650 [2024-11-20 17:56:56.597311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.650 [2024-11-20 17:56:56.615430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.650 [2024-11-20 17:56:56.615469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:29.650 [2024-11-20 17:56:56.615483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.072 ms 00:22:29.650 [2024-11-20 17:56:56.615493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.650 [2024-11-20 17:56:56.616284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.650 [2024-11-20 17:56:56.616311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:29.650 [2024-11-20 17:56:56.616324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:22:29.650 [2024-11-20 17:56:56.616334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.650 [2024-11-20 17:56:56.702121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.650 [2024-11-20 
17:56:56.702187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:29.650 [2024-11-20 17:56:56.702204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 85.897 ms 00:22:29.650 [2024-11-20 17:56:56.702215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.650 [2024-11-20 17:56:56.712891] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:22:29.650 [2024-11-20 17:56:56.728889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.650 [2024-11-20 17:56:56.728938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:29.650 [2024-11-20 17:56:56.728970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.611 ms 00:22:29.650 [2024-11-20 17:56:56.728988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.650 [2024-11-20 17:56:56.729114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.650 [2024-11-20 17:56:56.729128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:29.650 [2024-11-20 17:56:56.729140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:29.650 [2024-11-20 17:56:56.729150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.650 [2024-11-20 17:56:56.729203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.650 [2024-11-20 17:56:56.729214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:29.650 [2024-11-20 17:56:56.729225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:29.650 [2024-11-20 17:56:56.729235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.650 [2024-11-20 17:56:56.729273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.650 [2024-11-20 17:56:56.729287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:29.650 [2024-11-20 17:56:56.729298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:22:29.650 [2024-11-20 17:56:56.729308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.650 [2024-11-20 17:56:56.729345] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:29.650 [2024-11-20 17:56:56.729357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.650 [2024-11-20 17:56:56.729368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:29.650 [2024-11-20 17:56:56.729379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:29.650 [2024-11-20 17:56:56.729389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.650 [2024-11-20 17:56:56.765357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.650 [2024-11-20 17:56:56.765401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:29.650 [2024-11-20 17:56:56.765416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.004 ms 00:22:29.650 [2024-11-20 17:56:56.765427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.650 [2024-11-20 17:56:56.765547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:29.650 [2024-11-20 17:56:56.765562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:29.650 [2024-11-20 
17:56:56.765573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:29.650 [2024-11-20 17:56:56.765584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:29.650 [2024-11-20 17:56:56.766593] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:29.650 [2024-11-20 17:56:56.770804] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 393.594 ms, result 0 00:22:29.650 [2024-11-20 17:56:56.771538] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:29.650 [2024-11-20 17:56:56.789706] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:31.032  [2024-11-20T17:56:59.146Z] Copying: 27/256 [MB] (27 MBps) [2024-11-20T17:57:00.081Z] Copying: 51/256 [MB] (23 MBps) [2024-11-20T17:57:01.017Z] Copying: 75/256 [MB] (24 MBps) [2024-11-20T17:57:01.955Z] Copying: 100/256 [MB] (25 MBps) [2024-11-20T17:57:02.891Z] Copying: 125/256 [MB] (24 MBps) [2024-11-20T17:57:04.265Z] Copying: 150/256 [MB] (24 MBps) [2024-11-20T17:57:05.200Z] Copying: 174/256 [MB] (24 MBps) [2024-11-20T17:57:06.136Z] Copying: 199/256 [MB] (25 MBps) [2024-11-20T17:57:07.072Z] Copying: 225/256 [MB] (25 MBps) [2024-11-20T17:57:07.072Z] Copying: 250/256 [MB] (25 MBps) [2024-11-20T17:57:07.642Z] Copying: 256/256 [MB] (average 25 MBps)[2024-11-20 17:57:07.468613] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:40.466 [2024-11-20 17:57:07.493459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.466 [2024-11-20 17:57:07.493533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:40.466 [2024-11-20 17:57:07.493555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:40.466 [2024-11-20 17:57:07.493584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.466 [2024-11-20 17:57:07.493631] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:22:40.466 [2024-11-20 17:57:07.497245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.466 [2024-11-20 17:57:07.497286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:40.466 [2024-11-20 17:57:07.497303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.596 ms 00:22:40.466 [2024-11-20 17:57:07.497319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.466 [2024-11-20 17:57:07.497615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.466 [2024-11-20 17:57:07.497643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:40.466 [2024-11-20 17:57:07.497660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:22:40.466 [2024-11-20 17:57:07.497675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.466 [2024-11-20 17:57:07.500668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.466 [2024-11-20 17:57:07.500707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:40.466 [2024-11-20 17:57:07.500723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.973 ms 00:22:40.466 [2024-11-20 17:57:07.500738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:22:40.466 [2024-11-20 17:57:07.507172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.466 [2024-11-20 17:57:07.507220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:40.466 [2024-11-20 17:57:07.507235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.413 ms 00:22:40.466 [2024-11-20 17:57:07.507245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.466 [2024-11-20 17:57:07.545879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.466 [2024-11-20 17:57:07.545946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:40.466 [2024-11-20 17:57:07.545964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.607 ms 00:22:40.466 [2024-11-20 17:57:07.545975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.466 [2024-11-20 17:57:07.567501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.466 [2024-11-20 17:57:07.567556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:40.466 [2024-11-20 17:57:07.567574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.468 ms 00:22:40.466 [2024-11-20 17:57:07.567585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.466 [2024-11-20 17:57:07.567739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.466 [2024-11-20 17:57:07.567754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:40.466 [2024-11-20 17:57:07.567765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:22:40.466 [2024-11-20 17:57:07.567796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.466 [2024-11-20 17:57:07.604970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.466 [2024-11-20 17:57:07.605012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:40.466 [2024-11-20 17:57:07.605026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.203 ms 00:22:40.466 [2024-11-20 17:57:07.605036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.724 [2024-11-20 17:57:07.641059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.724 [2024-11-20 17:57:07.641099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:40.724 [2024-11-20 17:57:07.641112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.023 ms 00:22:40.724 [2024-11-20 17:57:07.641122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.724 [2024-11-20 17:57:07.676416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.724 [2024-11-20 17:57:07.676454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:40.724 [2024-11-20 17:57:07.676467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.295 ms 00:22:40.724 [2024-11-20 17:57:07.676476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.724 [2024-11-20 17:57:07.710949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.725 [2024-11-20 17:57:07.710988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:40.725 [2024-11-20 17:57:07.711001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.427 ms 00:22:40.725 
[2024-11-20 17:57:07.711010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.725 [2024-11-20 17:57:07.711066] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:40.725 [2024-11-20 17:57:07.711084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711337] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 
17:57:07.711602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:22:40.725 [2024-11-20 17:57:07.711881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:40.725 [2024-11-20 17:57:07.711996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:40.726 [2024-11-20 17:57:07.712006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:40.726 [2024-11-20 17:57:07.712016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:40.726 [2024-11-20 17:57:07.712027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:40.726 [2024-11-20 17:57:07.712037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:40.726 [2024-11-20 17:57:07.712048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:40.726 [2024-11-20 17:57:07.712059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:40.726 [2024-11-20 17:57:07.712069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:40.726 [2024-11-20 17:57:07.712080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:40.726 [2024-11-20 17:57:07.712090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:40.726 [2024-11-20 17:57:07.712101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:40.726 [2024-11-20 17:57:07.712124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:40.726 [2024-11-20 17:57:07.712135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:40.726 [2024-11-20 17:57:07.712145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:22:40.726 [2024-11-20 17:57:07.712156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:40.726 [2024-11-20 17:57:07.712167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:40.726 [2024-11-20 17:57:07.712185] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:40.726 [2024-11-20 17:57:07.712195] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 5259f8b1-3ab3-43ba-9a28-e53cd5fd0400 00:22:40.726 [2024-11-20 17:57:07.712206] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:40.726 [2024-11-20 17:57:07.712216] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:40.726 [2024-11-20 17:57:07.712226] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:40.726 [2024-11-20 17:57:07.712237] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:40.726 [2024-11-20 17:57:07.712247] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:40.726 [2024-11-20 17:57:07.712257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:40.726 [2024-11-20 17:57:07.712266] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:40.726 [2024-11-20 17:57:07.712276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:40.726 [2024-11-20 17:57:07.712286] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:40.726 [2024-11-20 17:57:07.712296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.726 [2024-11-20 17:57:07.712310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:40.726 [2024-11-20 17:57:07.712321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.234 ms 00:22:40.726 [2024-11-20 17:57:07.712332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.726 [2024-11-20 17:57:07.732278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.726 [2024-11-20 17:57:07.732317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:40.726 [2024-11-20 17:57:07.732330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.957 ms 00:22:40.726 [2024-11-20 17:57:07.732341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.726 [2024-11-20 17:57:07.732899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:40.726 [2024-11-20 17:57:07.732923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:40.726 [2024-11-20 17:57:07.732934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:22:40.726 [2024-11-20 17:57:07.732945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.726 [2024-11-20 17:57:07.787911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.726 [2024-11-20 17:57:07.787952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:40.726 [2024-11-20 17:57:07.787966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.726 [2024-11-20 17:57:07.787977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.726 [2024-11-20 17:57:07.788093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.726 [2024-11-20 17:57:07.788106] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:40.726 [2024-11-20 17:57:07.788119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.726 [2024-11-20 17:57:07.788129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.726 [2024-11-20 17:57:07.788179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.726 [2024-11-20 17:57:07.788192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:40.726 [2024-11-20 17:57:07.788203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.726 [2024-11-20 17:57:07.788214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.726 [2024-11-20 17:57:07.788233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.726 [2024-11-20 17:57:07.788247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:40.726 [2024-11-20 17:57:07.788257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.726 [2024-11-20 17:57:07.788267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.985 [2024-11-20 17:57:07.913287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.985 [2024-11-20 17:57:07.913345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:40.985 [2024-11-20 17:57:07.913360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.985 [2024-11-20 17:57:07.913371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.985 [2024-11-20 17:57:08.013904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.985 [2024-11-20 17:57:08.013963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:40.985 [2024-11-20 17:57:08.013979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.985 [2024-11-20 17:57:08.013990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.985 [2024-11-20 17:57:08.014093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.985 [2024-11-20 17:57:08.014107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:40.985 [2024-11-20 17:57:08.014118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.985 [2024-11-20 17:57:08.014128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.985 [2024-11-20 17:57:08.014158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.985 [2024-11-20 17:57:08.014169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:40.985 [2024-11-20 17:57:08.014184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.985 [2024-11-20 17:57:08.014195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.985 [2024-11-20 17:57:08.014308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.985 [2024-11-20 17:57:08.014322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:40.986 [2024-11-20 17:57:08.014333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.986 [2024-11-20 17:57:08.014343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.986 [2024-11-20 17:57:08.014380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:22:40.986 [2024-11-20 17:57:08.014394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:40.986 [2024-11-20 17:57:08.014404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.986 [2024-11-20 17:57:08.014418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.986 [2024-11-20 17:57:08.014459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.986 [2024-11-20 17:57:08.014471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:40.986 [2024-11-20 17:57:08.014481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.986 [2024-11-20 17:57:08.014491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.986 [2024-11-20 17:57:08.014533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:40.986 [2024-11-20 17:57:08.014546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:40.986 [2024-11-20 17:57:08.014560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:40.986 [2024-11-20 17:57:08.014570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:40.986 [2024-11-20 17:57:08.014708] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 522.116 ms, result 0 00:22:41.923 00:22:41.923 00:22:41.923 17:57:09 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:42.492 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:22:42.492 17:57:09 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:22:42.492 17:57:09 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:22:42.492 17:57:09 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:42.492 17:57:09 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:42.492 17:57:09 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:22:42.492 17:57:09 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:22:42.492 17:57:09 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78763 00:22:42.492 Process with pid 78763 is not found 00:22:42.492 17:57:09 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78763 ']' 00:22:42.492 17:57:09 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78763 00:22:42.492 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78763) - No such process 00:22:42.492 17:57:09 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78763 is not found' 00:22:42.492 00:22:42.492 real 1m9.591s 00:22:42.492 user 1m32.479s 00:22:42.492 sys 0m6.735s 00:22:42.492 17:57:09 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:42.492 17:57:09 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:22:42.492 ************************************ 00:22:42.492 END TEST ftl_trim 00:22:42.492 ************************************ 00:22:42.750 17:57:09 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:42.750 17:57:09 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:22:42.750 17:57:09 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:42.750 17:57:09 ftl -- common/autotest_common.sh@10 
-- # set +x 00:22:42.750 ************************************ 00:22:42.750 START TEST ftl_restore 00:22:42.750 ************************************ 00:22:42.750 17:57:09 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:22:42.750 * Looking for test storage... 00:22:42.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:42.750 17:57:09 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:42.750 17:57:09 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:22:42.750 17:57:09 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:42.750 17:57:09 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:42.750 17:57:09 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:42.750 17:57:09 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:42.750 17:57:09 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:42.750 17:57:09 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:22:42.750 17:57:09 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:22:42.750 17:57:09 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:22:42.750 17:57:09 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:22:42.750 17:57:09 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:22:42.750 17:57:09 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:22:42.750 17:57:09 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:22:42.751 17:57:09 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:42.751 17:57:09 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:22:42.751 17:57:09 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:22:42.751 17:57:09 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:42.751 17:57:09 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:42.751 17:57:09 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:22:43.009 17:57:09 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:22:43.009 17:57:09 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:43.009 17:57:09 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:22:43.009 17:57:09 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:22:43.009 17:57:09 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:22:43.009 17:57:09 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:22:43.009 17:57:09 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:43.009 17:57:09 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:22:43.009 17:57:09 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:22:43.009 17:57:09 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:43.009 17:57:09 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:43.009 17:57:09 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:22:43.009 17:57:09 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:43.009 17:57:09 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:43.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.009 --rc genhtml_branch_coverage=1 00:22:43.009 --rc genhtml_function_coverage=1 00:22:43.009 --rc genhtml_legend=1 00:22:43.009 --rc geninfo_all_blocks=1 00:22:43.009 --rc geninfo_unexecuted_blocks=1 00:22:43.009 00:22:43.009 ' 00:22:43.009 17:57:09 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:43.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.009 --rc genhtml_branch_coverage=1 00:22:43.009 --rc genhtml_function_coverage=1 00:22:43.009 --rc genhtml_legend=1 00:22:43.009 --rc geninfo_all_blocks=1 00:22:43.009 --rc geninfo_unexecuted_blocks=1 00:22:43.009 00:22:43.009 ' 00:22:43.009 17:57:09 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:43.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.009 --rc genhtml_branch_coverage=1 00:22:43.009 --rc genhtml_function_coverage=1 00:22:43.009 --rc genhtml_legend=1 00:22:43.009 --rc geninfo_all_blocks=1 00:22:43.009 --rc geninfo_unexecuted_blocks=1 00:22:43.009 00:22:43.009 ' 00:22:43.009 17:57:09 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:43.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:43.009 --rc genhtml_branch_coverage=1 00:22:43.009 --rc genhtml_function_coverage=1 00:22:43.009 --rc genhtml_legend=1 00:22:43.009 --rc geninfo_all_blocks=1 00:22:43.009 --rc geninfo_unexecuted_blocks=1 00:22:43.009 00:22:43.009 ' 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
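(The version checks traced above are scripts/common.sh's cmp_versions helper: lcov's version string is split on ".", "-" and ":" and compared field by field, so that `lt 1.15 2` decides whether the newer --rc lcov_* coverage options are safe to pass. A condensed reconstruction, inferred from the xtrace — the real helpers carry more edge-case handling:)

# Sketch of the version comparison traced above; reconstructed from the
# xtrace, so details may differ from the real scripts/common.sh.
decimal() {
  local d=$1
  [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0  # assumption: non-numeric -> 0
}

cmp_versions() {
  local IFS=.-:            # split version fields on '.', '-' and ':'
  local -a ver1 ver2
  read -ra ver1 <<< "$1"   # "1.15" -> (1 15), so ver1_l=2
  local op=$2
  read -ra ver2 <<< "$3"   # "2"    -> (2),    so ver2_l=1
  local v
  for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
    ver1[v]=$(decimal "${ver1[v]:-0}")
    ver2[v]=$(decimal "${ver2[v]:-0}")
    ((ver1[v] > ver2[v])) && { [[ $op == '>' ]]; return; }
    ((ver1[v] < ver2[v])) && { [[ $op == '<' ]]; return; }
  done
  [[ $op == '=' ]]
}

lt() { cmp_versions "$1" '<' "$2"; }  # lt 1.15 2 succeeds here, so the
                                      # branch/function coverage opts are kept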
00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.JMCg4V80U5 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:22:43.009 17:57:09 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:22:43.010 17:57:09 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:22:43.010 17:57:09 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:22:43.010 17:57:09 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:43.010 
17:57:09 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79038 00:22:43.010 17:57:09 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:43.010 17:57:09 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79038 00:22:43.010 17:57:09 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79038 ']' 00:22:43.010 17:57:09 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:43.010 17:57:09 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:43.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:43.010 17:57:09 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:43.010 17:57:09 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:43.010 17:57:09 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:43.010 [2024-11-20 17:57:10.086790] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:22:43.010 [2024-11-20 17:57:10.086916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79038 ] 00:22:43.267 [2024-11-20 17:57:10.266916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.267 [2024-11-20 17:57:10.376205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:44.199 17:57:11 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:44.199 17:57:11 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:22:44.199 17:57:11 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:44.199 17:57:11 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:22:44.199 17:57:11 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:44.199 17:57:11 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:22:44.199 17:57:11 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:22:44.199 17:57:11 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:44.457 17:57:11 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:44.457 17:57:11 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:22:44.457 17:57:11 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:44.457 17:57:11 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:22:44.457 17:57:11 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:44.457 17:57:11 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:44.457 17:57:11 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:44.457 17:57:11 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:44.716 17:57:11 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:44.716 { 00:22:44.716 "name": "nvme0n1", 00:22:44.716 "aliases": [ 00:22:44.716 "9e36c0db-610b-4c9f-8511-3b8f460ff0d5" 00:22:44.716 ], 00:22:44.716 "product_name": "NVMe disk", 00:22:44.716 "block_size": 4096, 00:22:44.716 "num_blocks": 1310720, 00:22:44.716 "uuid": 
"9e36c0db-610b-4c9f-8511-3b8f460ff0d5", 00:22:44.716 "numa_id": -1, 00:22:44.716 "assigned_rate_limits": { 00:22:44.716 "rw_ios_per_sec": 0, 00:22:44.716 "rw_mbytes_per_sec": 0, 00:22:44.716 "r_mbytes_per_sec": 0, 00:22:44.716 "w_mbytes_per_sec": 0 00:22:44.716 }, 00:22:44.716 "claimed": true, 00:22:44.716 "claim_type": "read_many_write_one", 00:22:44.716 "zoned": false, 00:22:44.716 "supported_io_types": { 00:22:44.716 "read": true, 00:22:44.716 "write": true, 00:22:44.716 "unmap": true, 00:22:44.716 "flush": true, 00:22:44.716 "reset": true, 00:22:44.716 "nvme_admin": true, 00:22:44.716 "nvme_io": true, 00:22:44.716 "nvme_io_md": false, 00:22:44.716 "write_zeroes": true, 00:22:44.716 "zcopy": false, 00:22:44.716 "get_zone_info": false, 00:22:44.716 "zone_management": false, 00:22:44.716 "zone_append": false, 00:22:44.716 "compare": true, 00:22:44.716 "compare_and_write": false, 00:22:44.716 "abort": true, 00:22:44.716 "seek_hole": false, 00:22:44.716 "seek_data": false, 00:22:44.716 "copy": true, 00:22:44.716 "nvme_iov_md": false 00:22:44.716 }, 00:22:44.716 "driver_specific": { 00:22:44.716 "nvme": [ 00:22:44.716 { 00:22:44.716 "pci_address": "0000:00:11.0", 00:22:44.716 "trid": { 00:22:44.716 "trtype": "PCIe", 00:22:44.716 "traddr": "0000:00:11.0" 00:22:44.716 }, 00:22:44.716 "ctrlr_data": { 00:22:44.716 "cntlid": 0, 00:22:44.716 "vendor_id": "0x1b36", 00:22:44.716 "model_number": "QEMU NVMe Ctrl", 00:22:44.716 "serial_number": "12341", 00:22:44.716 "firmware_revision": "8.0.0", 00:22:44.716 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:44.716 "oacs": { 00:22:44.716 "security": 0, 00:22:44.716 "format": 1, 00:22:44.716 "firmware": 0, 00:22:44.716 "ns_manage": 1 00:22:44.716 }, 00:22:44.716 "multi_ctrlr": false, 00:22:44.716 "ana_reporting": false 00:22:44.716 }, 00:22:44.716 "vs": { 00:22:44.716 "nvme_version": "1.4" 00:22:44.716 }, 00:22:44.716 "ns_data": { 00:22:44.716 "id": 1, 00:22:44.716 "can_share": false 00:22:44.716 } 00:22:44.716 } 00:22:44.716 ], 00:22:44.716 "mp_policy": "active_passive" 00:22:44.716 } 00:22:44.716 } 00:22:44.716 ]' 00:22:44.716 17:57:11 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:44.716 17:57:11 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:44.716 17:57:11 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:44.716 17:57:11 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:22:44.716 17:57:11 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:22:44.716 17:57:11 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:22:44.716 17:57:11 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:22:44.716 17:57:11 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:44.716 17:57:11 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:22:44.716 17:57:11 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:44.716 17:57:11 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:44.974 17:57:12 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=4194be54-3e5a-45b3-a54b-98b81a8b0659 00:22:44.974 17:57:12 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:22:44.974 17:57:12 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4194be54-3e5a-45b3-a54b-98b81a8b0659 00:22:45.232 17:57:12 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:22:45.490 17:57:12 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=053f174a-0dd1-4af1-b80f-a4131c49976c 00:22:45.490 17:57:12 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 053f174a-0dd1-4af1-b80f-a4131c49976c 00:22:45.748 17:57:12 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=73116cea-f2ba-4d9a-a4ff-487c05390c58 00:22:45.748 17:57:12 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:22:45.748 17:57:12 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 73116cea-f2ba-4d9a-a4ff-487c05390c58 00:22:45.748 17:57:12 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:22:45.748 17:57:12 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:45.748 17:57:12 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=73116cea-f2ba-4d9a-a4ff-487c05390c58 00:22:45.748 17:57:12 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:22:45.748 17:57:12 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 73116cea-f2ba-4d9a-a4ff-487c05390c58 00:22:45.748 17:57:12 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=73116cea-f2ba-4d9a-a4ff-487c05390c58 00:22:45.748 17:57:12 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:45.748 17:57:12 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:45.748 17:57:12 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:45.748 17:57:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 73116cea-f2ba-4d9a-a4ff-487c05390c58 00:22:45.748 17:57:12 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:45.748 { 00:22:45.748 "name": "73116cea-f2ba-4d9a-a4ff-487c05390c58", 00:22:45.748 "aliases": [ 00:22:45.748 "lvs/nvme0n1p0" 00:22:45.748 ], 00:22:45.748 "product_name": "Logical Volume", 00:22:45.748 "block_size": 4096, 00:22:45.748 "num_blocks": 26476544, 00:22:45.748 "uuid": "73116cea-f2ba-4d9a-a4ff-487c05390c58", 00:22:45.748 "assigned_rate_limits": { 00:22:45.748 "rw_ios_per_sec": 0, 00:22:45.748 "rw_mbytes_per_sec": 0, 00:22:45.748 "r_mbytes_per_sec": 0, 00:22:45.748 "w_mbytes_per_sec": 0 00:22:45.748 }, 00:22:45.748 "claimed": false, 00:22:45.748 "zoned": false, 00:22:45.748 "supported_io_types": { 00:22:45.748 "read": true, 00:22:45.748 "write": true, 00:22:45.748 "unmap": true, 00:22:45.748 "flush": false, 00:22:45.748 "reset": true, 00:22:45.748 "nvme_admin": false, 00:22:45.748 "nvme_io": false, 00:22:45.748 "nvme_io_md": false, 00:22:45.748 "write_zeroes": true, 00:22:45.748 "zcopy": false, 00:22:45.748 "get_zone_info": false, 00:22:45.748 "zone_management": false, 00:22:45.748 "zone_append": false, 00:22:45.748 "compare": false, 00:22:45.748 "compare_and_write": false, 00:22:45.748 "abort": false, 00:22:45.748 "seek_hole": true, 00:22:45.748 "seek_data": true, 00:22:45.748 "copy": false, 00:22:45.748 "nvme_iov_md": false 00:22:45.748 }, 00:22:45.748 "driver_specific": { 00:22:45.748 "lvol": { 00:22:45.748 "lvol_store_uuid": "053f174a-0dd1-4af1-b80f-a4131c49976c", 00:22:45.748 "base_bdev": "nvme0n1", 00:22:45.748 "thin_provision": true, 00:22:45.748 "num_allocated_clusters": 0, 00:22:45.748 "snapshot": false, 00:22:45.748 "clone": false, 00:22:45.748 "esnap_clone": false 00:22:45.748 } 00:22:45.748 } 00:22:45.748 } 00:22:45.748 ]' 00:22:46.007 17:57:12 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:46.007 17:57:12 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:46.007 17:57:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:46.007 17:57:12 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:46.007 17:57:12 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:46.007 17:57:12 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:46.007 17:57:12 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:22:46.007 17:57:12 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:22:46.007 17:57:12 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:22:46.266 17:57:13 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:22:46.266 17:57:13 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:22:46.266 17:57:13 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 73116cea-f2ba-4d9a-a4ff-487c05390c58 00:22:46.266 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=73116cea-f2ba-4d9a-a4ff-487c05390c58 00:22:46.266 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:46.266 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:46.266 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:46.266 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 73116cea-f2ba-4d9a-a4ff-487c05390c58 00:22:46.524 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:46.524 { 00:22:46.524 "name": "73116cea-f2ba-4d9a-a4ff-487c05390c58", 00:22:46.524 "aliases": [ 00:22:46.524 "lvs/nvme0n1p0" 00:22:46.524 ], 00:22:46.524 "product_name": "Logical Volume", 00:22:46.524 "block_size": 4096, 00:22:46.524 "num_blocks": 26476544, 00:22:46.524 "uuid": "73116cea-f2ba-4d9a-a4ff-487c05390c58", 00:22:46.524 "assigned_rate_limits": { 00:22:46.524 "rw_ios_per_sec": 0, 00:22:46.524 "rw_mbytes_per_sec": 0, 00:22:46.524 "r_mbytes_per_sec": 0, 00:22:46.524 "w_mbytes_per_sec": 0 00:22:46.524 }, 00:22:46.524 "claimed": false, 00:22:46.524 "zoned": false, 00:22:46.524 "supported_io_types": { 00:22:46.524 "read": true, 00:22:46.524 "write": true, 00:22:46.524 "unmap": true, 00:22:46.524 "flush": false, 00:22:46.524 "reset": true, 00:22:46.524 "nvme_admin": false, 00:22:46.524 "nvme_io": false, 00:22:46.524 "nvme_io_md": false, 00:22:46.524 "write_zeroes": true, 00:22:46.524 "zcopy": false, 00:22:46.524 "get_zone_info": false, 00:22:46.524 "zone_management": false, 00:22:46.524 "zone_append": false, 00:22:46.524 "compare": false, 00:22:46.524 "compare_and_write": false, 00:22:46.524 "abort": false, 00:22:46.524 "seek_hole": true, 00:22:46.524 "seek_data": true, 00:22:46.524 "copy": false, 00:22:46.524 "nvme_iov_md": false 00:22:46.524 }, 00:22:46.524 "driver_specific": { 00:22:46.524 "lvol": { 00:22:46.524 "lvol_store_uuid": "053f174a-0dd1-4af1-b80f-a4131c49976c", 00:22:46.524 "base_bdev": "nvme0n1", 00:22:46.524 "thin_provision": true, 00:22:46.524 "num_allocated_clusters": 0, 00:22:46.524 "snapshot": false, 00:22:46.524 "clone": false, 00:22:46.524 "esnap_clone": false 00:22:46.524 } 00:22:46.524 } 00:22:46.524 } 00:22:46.524 ]' 00:22:46.525 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
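(The jq calls around this point are the body of get_bdev_size from test/common/autotest_common.sh: it pulls block_size and num_blocks out of the bdev_get_bdevs JSON and echoes the bdev's size in MiB. A sketch reconstructed from the logged steps — the exact helper text may differ:)

# get_bdev_size as implied by the trace above; the MiB conversion is
# inferred from the logged numbers (4096 B/blk * 26476544 blks = 103424 MiB).
get_bdev_size() {
  local bdev_name=$1
  local bdev_info bs nb
  bdev_info=$("$rootdir/scripts/rpc.py" bdev_get_bdevs -b "$bdev_name")
  bs=$(jq '.[] .block_size' <<< "$bdev_info")   # 4096 for this lvol
  nb=$(jq '.[] .num_blocks' <<< "$bdev_info")   # 26476544 blocks
  echo $((bs * nb / 1024 / 1024))               # 103424 MiB
}

(It ran once against nvme0n1 earlier — 1310720 blocks, 5120 MiB — to size the lvol store, and runs here against the 73116cea lvol, 103424 MiB; base_size and cache_size then both come out as 5171 MiB for the nvc0n1 split below.)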
00:22:46.525 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:46.525 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:46.525 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:22:46.525 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:46.525 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:46.525 17:57:13 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:22:46.525 17:57:13 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:22:46.783 17:57:13 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:22:46.783 17:57:13 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 73116cea-f2ba-4d9a-a4ff-487c05390c58 00:22:46.783 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=73116cea-f2ba-4d9a-a4ff-487c05390c58 00:22:46.783 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:22:46.783 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:22:46.783 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:22:46.783 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 73116cea-f2ba-4d9a-a4ff-487c05390c58 00:22:47.042 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:22:47.042 { 00:22:47.042 "name": "73116cea-f2ba-4d9a-a4ff-487c05390c58", 00:22:47.042 "aliases": [ 00:22:47.042 "lvs/nvme0n1p0" 00:22:47.042 ], 00:22:47.042 "product_name": "Logical Volume", 00:22:47.042 "block_size": 4096, 00:22:47.042 "num_blocks": 26476544, 00:22:47.042 "uuid": "73116cea-f2ba-4d9a-a4ff-487c05390c58", 00:22:47.042 "assigned_rate_limits": { 00:22:47.042 "rw_ios_per_sec": 0, 00:22:47.042 "rw_mbytes_per_sec": 0, 00:22:47.042 "r_mbytes_per_sec": 0, 00:22:47.042 "w_mbytes_per_sec": 0 00:22:47.042 }, 00:22:47.042 "claimed": false, 00:22:47.042 "zoned": false, 00:22:47.042 "supported_io_types": { 00:22:47.042 "read": true, 00:22:47.042 "write": true, 00:22:47.042 "unmap": true, 00:22:47.042 "flush": false, 00:22:47.042 "reset": true, 00:22:47.042 "nvme_admin": false, 00:22:47.042 "nvme_io": false, 00:22:47.042 "nvme_io_md": false, 00:22:47.042 "write_zeroes": true, 00:22:47.042 "zcopy": false, 00:22:47.042 "get_zone_info": false, 00:22:47.042 "zone_management": false, 00:22:47.042 "zone_append": false, 00:22:47.042 "compare": false, 00:22:47.042 "compare_and_write": false, 00:22:47.042 "abort": false, 00:22:47.042 "seek_hole": true, 00:22:47.042 "seek_data": true, 00:22:47.042 "copy": false, 00:22:47.042 "nvme_iov_md": false 00:22:47.042 }, 00:22:47.042 "driver_specific": { 00:22:47.042 "lvol": { 00:22:47.042 "lvol_store_uuid": "053f174a-0dd1-4af1-b80f-a4131c49976c", 00:22:47.042 "base_bdev": "nvme0n1", 00:22:47.042 "thin_provision": true, 00:22:47.042 "num_allocated_clusters": 0, 00:22:47.042 "snapshot": false, 00:22:47.042 "clone": false, 00:22:47.042 "esnap_clone": false 00:22:47.042 } 00:22:47.042 } 00:22:47.042 } 00:22:47.042 ]' 00:22:47.042 17:57:13 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:22:47.042 17:57:14 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:22:47.042 17:57:14 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:22:47.042 17:57:14 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:22:47.042 17:57:14 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:22:47.042 17:57:14 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:22:47.042 17:57:14 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:22:47.042 17:57:14 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 73116cea-f2ba-4d9a-a4ff-487c05390c58 --l2p_dram_limit 10' 00:22:47.042 17:57:14 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:22:47.042 17:57:14 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:22:47.042 17:57:14 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:22:47.042 17:57:14 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:22:47.042 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:22:47.042 17:57:14 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 73116cea-f2ba-4d9a-a4ff-487c05390c58 --l2p_dram_limit 10 -c nvc0n1p0 00:22:47.302 [2024-11-20 17:57:14.268655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.302 [2024-11-20 17:57:14.268715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:47.302 [2024-11-20 17:57:14.268735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:47.302 [2024-11-20 17:57:14.268746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.302 [2024-11-20 17:57:14.268851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.302 [2024-11-20 17:57:14.268867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:47.302 [2024-11-20 17:57:14.268881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:22:47.302 [2024-11-20 17:57:14.268892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.302 [2024-11-20 17:57:14.268918] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:47.303 [2024-11-20 17:57:14.269972] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:47.303 [2024-11-20 17:57:14.270012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.303 [2024-11-20 17:57:14.270023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:47.303 [2024-11-20 17:57:14.270037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:22:47.303 [2024-11-20 17:57:14.270048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.303 [2024-11-20 17:57:14.270234] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8837d17e-478f-4076-a8da-ed3abf6761e2 00:22:47.303 [2024-11-20 17:57:14.271652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.303 [2024-11-20 17:57:14.271691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:22:47.303 [2024-11-20 17:57:14.271704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:47.303 [2024-11-20 17:57:14.271720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.303 [2024-11-20 17:57:14.279128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.303 [2024-11-20 
17:57:14.279166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:47.303 [2024-11-20 17:57:14.279179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.352 ms 00:22:47.303 [2024-11-20 17:57:14.279192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.303 [2024-11-20 17:57:14.279294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.303 [2024-11-20 17:57:14.279311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:47.303 [2024-11-20 17:57:14.279324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:22:47.303 [2024-11-20 17:57:14.279342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.303 [2024-11-20 17:57:14.279404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.303 [2024-11-20 17:57:14.279426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:47.303 [2024-11-20 17:57:14.279437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:47.303 [2024-11-20 17:57:14.279454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.303 [2024-11-20 17:57:14.279481] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:47.303 [2024-11-20 17:57:14.284546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.303 [2024-11-20 17:57:14.284587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:47.303 [2024-11-20 17:57:14.284605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.078 ms 00:22:47.303 [2024-11-20 17:57:14.284616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.303 [2024-11-20 17:57:14.284656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.303 [2024-11-20 17:57:14.284668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:47.303 [2024-11-20 17:57:14.284681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:47.303 [2024-11-20 17:57:14.284691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.303 [2024-11-20 17:57:14.284741] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:22:47.303 [2024-11-20 17:57:14.284884] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:47.303 [2024-11-20 17:57:14.284911] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:47.303 [2024-11-20 17:57:14.284927] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:47.303 [2024-11-20 17:57:14.284944] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:47.303 [2024-11-20 17:57:14.284956] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:47.303 [2024-11-20 17:57:14.284970] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:47.303 [2024-11-20 17:57:14.284981] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:47.303 [2024-11-20 17:57:14.284996] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:47.303 [2024-11-20 17:57:14.285007] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:47.303 [2024-11-20 17:57:14.285019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.303 [2024-11-20 17:57:14.285030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:47.303 [2024-11-20 17:57:14.285043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:22:47.303 [2024-11-20 17:57:14.285065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.303 [2024-11-20 17:57:14.285145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.303 [2024-11-20 17:57:14.285162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:47.303 [2024-11-20 17:57:14.285176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:47.303 [2024-11-20 17:57:14.285187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.303 [2024-11-20 17:57:14.285287] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:47.303 [2024-11-20 17:57:14.285304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:47.303 [2024-11-20 17:57:14.285318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:47.303 [2024-11-20 17:57:14.285330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.303 [2024-11-20 17:57:14.285343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:47.303 [2024-11-20 17:57:14.285353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:47.303 [2024-11-20 17:57:14.285365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:47.303 [2024-11-20 17:57:14.285375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:47.303 [2024-11-20 17:57:14.285387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:47.303 [2024-11-20 17:57:14.285396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:47.303 [2024-11-20 17:57:14.285409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:47.303 [2024-11-20 17:57:14.285419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:47.303 [2024-11-20 17:57:14.285430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:47.303 [2024-11-20 17:57:14.285441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:47.303 [2024-11-20 17:57:14.285453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:47.303 [2024-11-20 17:57:14.285463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.303 [2024-11-20 17:57:14.285477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:47.303 [2024-11-20 17:57:14.285487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:47.303 [2024-11-20 17:57:14.285500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.303 [2024-11-20 17:57:14.285510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:47.303 [2024-11-20 17:57:14.285522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:47.303 [2024-11-20 17:57:14.285532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.303 [2024-11-20 17:57:14.285543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:47.303 
[2024-11-20 17:57:14.285553] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:47.303 [2024-11-20 17:57:14.285565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.303 [2024-11-20 17:57:14.285575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:47.303 [2024-11-20 17:57:14.285587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:47.303 [2024-11-20 17:57:14.285603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.303 [2024-11-20 17:57:14.285615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:47.303 [2024-11-20 17:57:14.285625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:47.303 [2024-11-20 17:57:14.285636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:47.303 [2024-11-20 17:57:14.285646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:47.303 [2024-11-20 17:57:14.285661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:47.303 [2024-11-20 17:57:14.285670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:47.303 [2024-11-20 17:57:14.285683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:47.303 [2024-11-20 17:57:14.285693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:47.303 [2024-11-20 17:57:14.285705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:47.303 [2024-11-20 17:57:14.285714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:47.303 [2024-11-20 17:57:14.285726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:47.303 [2024-11-20 17:57:14.285735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.303 [2024-11-20 17:57:14.285747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:47.303 [2024-11-20 17:57:14.285757] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:47.303 [2024-11-20 17:57:14.285777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.303 [2024-11-20 17:57:14.285787] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:47.303 [2024-11-20 17:57:14.285800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:47.303 [2024-11-20 17:57:14.285812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:47.303 [2024-11-20 17:57:14.285827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:47.303 [2024-11-20 17:57:14.285838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:47.303 [2024-11-20 17:57:14.285853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:47.303 [2024-11-20 17:57:14.285863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:47.303 [2024-11-20 17:57:14.285875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:47.303 [2024-11-20 17:57:14.285884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:47.303 [2024-11-20 17:57:14.285896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:47.303 [2024-11-20 17:57:14.285911] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:47.303 [2024-11-20 
17:57:14.285927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:47.304 [2024-11-20 17:57:14.285942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:47.304 [2024-11-20 17:57:14.285955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:47.304 [2024-11-20 17:57:14.285966] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:47.304 [2024-11-20 17:57:14.285979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:47.304 [2024-11-20 17:57:14.285989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:47.304 [2024-11-20 17:57:14.286002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:47.304 [2024-11-20 17:57:14.286012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:47.304 [2024-11-20 17:57:14.286025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:47.304 [2024-11-20 17:57:14.286036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:47.304 [2024-11-20 17:57:14.286052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:47.304 [2024-11-20 17:57:14.286063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:47.304 [2024-11-20 17:57:14.286075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:47.304 [2024-11-20 17:57:14.286086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:47.304 [2024-11-20 17:57:14.286100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:47.304 [2024-11-20 17:57:14.286111] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:47.304 [2024-11-20 17:57:14.286124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:47.304 [2024-11-20 17:57:14.286137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:47.304 [2024-11-20 17:57:14.286149] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:47.304 [2024-11-20 17:57:14.286160] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:47.304 [2024-11-20 17:57:14.286173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:47.304 [2024-11-20 17:57:14.286184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:47.304 [2024-11-20 17:57:14.286197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:47.304 [2024-11-20 17:57:14.286210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:22:47.304 [2024-11-20 17:57:14.286223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:47.304 [2024-11-20 17:57:14.286266] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:22:47.304 [2024-11-20 17:57:14.286285] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:22:51.491 [2024-11-20 17:57:18.102480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.102562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:22:51.491 [2024-11-20 17:57:18.102582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3822.410 ms 00:22:51.491 [2024-11-20 17:57:18.102596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.140689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.140748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:51.491 [2024-11-20 17:57:18.140765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.740 ms 00:22:51.491 [2024-11-20 17:57:18.140796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.140944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.140962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:51.491 [2024-11-20 17:57:18.140974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:22:51.491 [2024-11-20 17:57:18.140996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.183356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.183408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:51.491 [2024-11-20 17:57:18.183422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.387 ms 00:22:51.491 [2024-11-20 17:57:18.183435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.183473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.183491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:51.491 [2024-11-20 17:57:18.183503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:51.491 [2024-11-20 17:57:18.183515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.184015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.184045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:51.491 [2024-11-20 17:57:18.184057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:22:51.491 [2024-11-20 17:57:18.184070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 
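The "/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected" message recorded earlier in this trace is a shell error, not an FTL failure: restore.sh ran '[' '' -eq 1 ']', an arithmetic test against an empty variable, so [ returned non-zero and the script simply fell through to the bdev_ftl_create path at restore.sh@58. A minimal sketch of the usual guard for that pattern, assuming an illustrative variable name (the actual variable tested at restore.sh line 54 is not visible in this log):

    # '[ "" -eq 1 ]' is a runtime error in bash; default the value first...
    zoned_mode=${zoned_mode:-0}            # illustrative name, not from restore.sh
    if [ "$zoned_mode" -eq 1 ]; then
        echo "zoned mode requested"
    fi
    # ...or gate the numeric comparison on the variable being non-empty:
    if [ -n "$zoned_mode" ] && [ "$zoned_mode" -eq 1 ]; then
        echo "zoned mode requested"
    fi

Either form keeps the test well-defined when the flag is unset, which is exactly the state the trace shows at line 54.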
[2024-11-20 17:57:18.184169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.184184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:51.491 [2024-11-20 17:57:18.184197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:22:51.491 [2024-11-20 17:57:18.184213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.203831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.203876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:51.491 [2024-11-20 17:57:18.203906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.629 ms 00:22:51.491 [2024-11-20 17:57:18.203919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.225212] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:51.491 [2024-11-20 17:57:18.228448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.228482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:51.491 [2024-11-20 17:57:18.228499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.467 ms 00:22:51.491 [2024-11-20 17:57:18.228511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.326597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.326655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:22:51.491 [2024-11-20 17:57:18.326674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.207 ms 00:22:51.491 [2024-11-20 17:57:18.326685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.326876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.326894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:51.491 [2024-11-20 17:57:18.326912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms 00:22:51.491 [2024-11-20 17:57:18.326922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.363149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.363191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:22:51.491 [2024-11-20 17:57:18.363208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.230 ms 00:22:51.491 [2024-11-20 17:57:18.363219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.398135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.398170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:22:51.491 [2024-11-20 17:57:18.398188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.924 ms 00:22:51.491 [2024-11-20 17:57:18.398199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.398898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.398927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:51.491 
[2024-11-20 17:57:18.398942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:22:51.491 [2024-11-20 17:57:18.398955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.503189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.503238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:22:51.491 [2024-11-20 17:57:18.503261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.342 ms 00:22:51.491 [2024-11-20 17:57:18.503272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.540679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.540720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:22:51.491 [2024-11-20 17:57:18.540737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.381 ms 00:22:51.491 [2024-11-20 17:57:18.540749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.577088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.577141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:22:51.491 [2024-11-20 17:57:18.577159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.343 ms 00:22:51.491 [2024-11-20 17:57:18.577170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.612986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.613024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:51.491 [2024-11-20 17:57:18.613055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.829 ms 00:22:51.491 [2024-11-20 17:57:18.613066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.613113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.613125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:51.491 [2024-11-20 17:57:18.613141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:51.491 [2024-11-20 17:57:18.613152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.613268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:51.491 [2024-11-20 17:57:18.613282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:51.491 [2024-11-20 17:57:18.613298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:22:51.491 [2024-11-20 17:57:18.613309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:51.491 [2024-11-20 17:57:18.614325] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4352.294 ms, result 0 00:22:51.491 { 00:22:51.491 "name": "ftl0", 00:22:51.491 "uuid": "8837d17e-478f-4076-a8da-ed3abf6761e2" 00:22:51.491 } 00:22:51.491 17:57:18 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:22:51.491 17:57:18 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:22:51.750 17:57:18 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:22:51.750 17:57:18 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:22:52.008 [2024-11-20 17:57:19.033032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.008 [2024-11-20 17:57:19.033107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:52.008 [2024-11-20 17:57:19.033123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:22:52.008 [2024-11-20 17:57:19.033145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.008 [2024-11-20 17:57:19.033173] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:52.008 [2024-11-20 17:57:19.037305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.008 [2024-11-20 17:57:19.037338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:52.008 [2024-11-20 17:57:19.037368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.116 ms 00:22:52.008 [2024-11-20 17:57:19.037379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.008 [2024-11-20 17:57:19.037628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.008 [2024-11-20 17:57:19.037662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:52.008 [2024-11-20 17:57:19.037676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:22:52.008 [2024-11-20 17:57:19.037687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.008 [2024-11-20 17:57:19.040202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.008 [2024-11-20 17:57:19.040225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:52.008 [2024-11-20 17:57:19.040239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.499 ms 00:22:52.008 [2024-11-20 17:57:19.040250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.008 [2024-11-20 17:57:19.045199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.008 [2024-11-20 17:57:19.045233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:52.008 [2024-11-20 17:57:19.045267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.933 ms 00:22:52.008 [2024-11-20 17:57:19.045278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.008 [2024-11-20 17:57:19.081212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.008 [2024-11-20 17:57:19.081252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:52.008 [2024-11-20 17:57:19.081285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.921 ms 00:22:52.008 [2024-11-20 17:57:19.081295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.008 [2024-11-20 17:57:19.103324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.008 [2024-11-20 17:57:19.103363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:52.008 [2024-11-20 17:57:19.103396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.016 ms 00:22:52.008 [2024-11-20 17:57:19.103406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.008 [2024-11-20 17:57:19.103550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.008 [2024-11-20 17:57:19.103566] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:52.008 [2024-11-20 17:57:19.103581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:22:52.008 [2024-11-20 17:57:19.103591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.008 [2024-11-20 17:57:19.138897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.008 [2024-11-20 17:57:19.138935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:52.008 [2024-11-20 17:57:19.138966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.337 ms 00:22:52.008 [2024-11-20 17:57:19.138976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.008 [2024-11-20 17:57:19.173868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.008 [2024-11-20 17:57:19.173905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:52.008 [2024-11-20 17:57:19.173937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.904 ms 00:22:52.008 [2024-11-20 17:57:19.173947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.274 [2024-11-20 17:57:19.208616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.274 [2024-11-20 17:57:19.208652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:52.274 [2024-11-20 17:57:19.208683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.677 ms 00:22:52.274 [2024-11-20 17:57:19.208692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.274 [2024-11-20 17:57:19.243716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.274 [2024-11-20 17:57:19.243752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:52.274 [2024-11-20 17:57:19.243790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.967 ms 00:22:52.274 [2024-11-20 17:57:19.243800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.274 [2024-11-20 17:57:19.243843] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:52.274 [2024-11-20 17:57:19.243859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.243876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.243888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.243901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.243912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.243925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.243936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.243952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.243963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.243976] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.243987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 
[2024-11-20 17:57:19.244296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:22:52.274 [2024-11-20 17:57:19.244614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.244994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.245004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.245017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.245028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.245041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.245052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.245068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.245079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.245092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.245103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.245118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:52.274 [2024-11-20 17:57:19.245136] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:52.274 [2024-11-20 17:57:19.245152] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8837d17e-478f-4076-a8da-ed3abf6761e2 00:22:52.274 [2024-11-20 17:57:19.245163] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:22:52.274 [2024-11-20 17:57:19.245178] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:52.274 [2024-11-20 17:57:19.245188] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:52.274 [2024-11-20 17:57:19.245205] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:52.274 [2024-11-20 17:57:19.245215] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:52.274 [2024-11-20 17:57:19.245228] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:52.274 [2024-11-20 17:57:19.245238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:52.274 [2024-11-20 17:57:19.245250] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:52.274 [2024-11-20 17:57:19.245259] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:22:52.274 [2024-11-20 17:57:19.245271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.274 [2024-11-20 17:57:19.245282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:52.274 [2024-11-20 17:57:19.245295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.433 ms 00:22:52.274 [2024-11-20 17:57:19.245305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.274 [2024-11-20 17:57:19.264725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.274 [2024-11-20 17:57:19.264758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:52.274 [2024-11-20 17:57:19.264789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.395 ms 00:22:52.274 [2024-11-20 17:57:19.264800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.274 [2024-11-20 17:57:19.265413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:52.274 [2024-11-20 17:57:19.265436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:52.274 [2024-11-20 17:57:19.265453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:22:52.274 [2024-11-20 17:57:19.265463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.274 [2024-11-20 17:57:19.330519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.274 [2024-11-20 17:57:19.330556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:52.274 [2024-11-20 17:57:19.330572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.274 [2024-11-20 17:57:19.330584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.274 [2024-11-20 17:57:19.330643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.274 [2024-11-20 17:57:19.330655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:52.274 [2024-11-20 17:57:19.330672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.274 [2024-11-20 17:57:19.330683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.274 [2024-11-20 17:57:19.330788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.275 [2024-11-20 17:57:19.330803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:52.275 [2024-11-20 17:57:19.330816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.275 [2024-11-20 17:57:19.330827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.275 [2024-11-20 17:57:19.330853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.275 [2024-11-20 17:57:19.330864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:52.275 [2024-11-20 17:57:19.330878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.275 [2024-11-20 17:57:19.330887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.532 [2024-11-20 17:57:19.454300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.532 [2024-11-20 17:57:19.454351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:52.532 [2024-11-20 17:57:19.454370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:22:52.532 [2024-11-20 17:57:19.454381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.532 [2024-11-20 17:57:19.554572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.532 [2024-11-20 17:57:19.554633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:52.532 [2024-11-20 17:57:19.554652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.532 [2024-11-20 17:57:19.554666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.532 [2024-11-20 17:57:19.554808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.532 [2024-11-20 17:57:19.554823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:52.532 [2024-11-20 17:57:19.554837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.532 [2024-11-20 17:57:19.554847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.532 [2024-11-20 17:57:19.554917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.532 [2024-11-20 17:57:19.554930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:52.532 [2024-11-20 17:57:19.554943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.532 [2024-11-20 17:57:19.554953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.532 [2024-11-20 17:57:19.555076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.532 [2024-11-20 17:57:19.555092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:52.532 [2024-11-20 17:57:19.555106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.532 [2024-11-20 17:57:19.555116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.532 [2024-11-20 17:57:19.555160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.532 [2024-11-20 17:57:19.555173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:52.532 [2024-11-20 17:57:19.555187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.532 [2024-11-20 17:57:19.555198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.532 [2024-11-20 17:57:19.555244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.532 [2024-11-20 17:57:19.555256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:52.532 [2024-11-20 17:57:19.555269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.532 [2024-11-20 17:57:19.555279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.532 [2024-11-20 17:57:19.555330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:52.532 [2024-11-20 17:57:19.555356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:52.532 [2024-11-20 17:57:19.555369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:52.532 [2024-11-20 17:57:19.555380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:52.532 [2024-11-20 17:57:19.555519] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 523.295 ms, result 0 00:22:52.532 true 00:22:52.532 17:57:19 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79038 
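The killprocess 79038 call above tears down the SPDK target app that served ftl0; its xtrace in the lines that follow shows the usual teardown sequence: validate the pid argument, probe the process with kill -0, read the command name with ps so a sudo wrapper is never signalled directly, then kill and wait to reap the exit status. A sketch of such a helper, reconstructed from that trace (hypothetical; the real implementation in autotest_common.sh may differ in details not visible here):

    # Reconstructed sketch of a killprocess-style helper; anything beyond
    # what the xtrace shows is an assumption.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # no pid supplied
        kill -0 "$pid" 2>/dev/null || return 0 # process already gone
        if [ "$(uname)" = Linux ]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [ "$name" = sudo ] && return 1     # never signal a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                            # reap and propagate exit status
    }

The trailing wait matters in tests like this one: it both reaps the child and surfaces its exit code, so a target that crashes during shutdown fails the test instead of disappearing silently.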
00:22:52.532 17:57:19 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79038 ']' 00:22:52.533 17:57:19 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79038 00:22:52.533 17:57:19 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:22:52.533 17:57:19 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.533 17:57:19 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79038 00:22:52.533 17:57:19 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:52.533 17:57:19 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:52.533 killing process with pid 79038 00:22:52.533 17:57:19 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79038' 00:22:52.533 17:57:19 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79038 00:22:52.533 17:57:19 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79038 00:22:55.816 17:57:22 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:23:00.003 262144+0 records in 00:23:00.003 262144+0 records out 00:23:00.003 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.12311 s, 260 MB/s 00:23:00.003 17:57:26 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:01.379 17:57:28 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:01.379 [2024-11-20 17:57:28.265313] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:23:01.379 [2024-11-20 17:57:28.265443] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79284 ] 00:23:01.379 [2024-11-20 17:57:28.450753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.638 [2024-11-20 17:57:28.564095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.897 [2024-11-20 17:57:28.928616] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:01.897 [2024-11-20 17:57:28.928691] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:02.158 [2024-11-20 17:57:29.092672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.158 [2024-11-20 17:57:29.092733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:02.158 [2024-11-20 17:57:29.092753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:02.158 [2024-11-20 17:57:29.092764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.158 [2024-11-20 17:57:29.092831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.158 [2024-11-20 17:57:29.092843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:02.158 [2024-11-20 17:57:29.092858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:23:02.158 [2024-11-20 17:57:29.092868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.158 [2024-11-20 17:57:29.092888] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:23:02.158 [2024-11-20 17:57:29.093838] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:02.158 [2024-11-20 17:57:29.093868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.158 [2024-11-20 17:57:29.093880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:02.158 [2024-11-20 17:57:29.093891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.986 ms 00:23:02.158 [2024-11-20 17:57:29.093901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.158 [2024-11-20 17:57:29.095359] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:02.158 [2024-11-20 17:57:29.114043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.158 [2024-11-20 17:57:29.114085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:02.158 [2024-11-20 17:57:29.114099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.714 ms 00:23:02.158 [2024-11-20 17:57:29.114110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.158 [2024-11-20 17:57:29.114177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.158 [2024-11-20 17:57:29.114190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:02.158 [2024-11-20 17:57:29.114201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:23:02.158 [2024-11-20 17:57:29.114211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.158 [2024-11-20 17:57:29.120943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.158 [2024-11-20 17:57:29.120977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:02.158 [2024-11-20 17:57:29.120990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.667 ms 00:23:02.158 [2024-11-20 17:57:29.121005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.158 [2024-11-20 17:57:29.121086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.158 [2024-11-20 17:57:29.121100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:02.158 [2024-11-20 17:57:29.121112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:02.158 [2024-11-20 17:57:29.121121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.158 [2024-11-20 17:57:29.121162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.158 [2024-11-20 17:57:29.121175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:02.158 [2024-11-20 17:57:29.121185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:02.158 [2024-11-20 17:57:29.121195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.158 [2024-11-20 17:57:29.121224] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:02.158 [2024-11-20 17:57:29.125967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.158 [2024-11-20 17:57:29.126003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:02.158 [2024-11-20 17:57:29.126015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.760 ms 00:23:02.158 [2024-11-20 17:57:29.126028] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.158 [2024-11-20 17:57:29.126059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.158 [2024-11-20 17:57:29.126070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:02.158 [2024-11-20 17:57:29.126080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:02.158 [2024-11-20 17:57:29.126091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.158 [2024-11-20 17:57:29.126143] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:02.158 [2024-11-20 17:57:29.126166] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:02.158 [2024-11-20 17:57:29.126202] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:02.158 [2024-11-20 17:57:29.126223] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:02.158 [2024-11-20 17:57:29.126310] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:02.158 [2024-11-20 17:57:29.126323] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:02.158 [2024-11-20 17:57:29.126336] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:02.159 [2024-11-20 17:57:29.126349] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:02.159 [2024-11-20 17:57:29.126361] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:02.159 [2024-11-20 17:57:29.126372] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:02.159 [2024-11-20 17:57:29.126382] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:02.159 [2024-11-20 17:57:29.126392] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:02.159 [2024-11-20 17:57:29.126405] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:02.159 [2024-11-20 17:57:29.126415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.159 [2024-11-20 17:57:29.126425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:02.159 [2024-11-20 17:57:29.126435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:23:02.159 [2024-11-20 17:57:29.126445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.159 [2024-11-20 17:57:29.126516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.159 [2024-11-20 17:57:29.126533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:02.159 [2024-11-20 17:57:29.126543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:23:02.159 [2024-11-20 17:57:29.126553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.159 [2024-11-20 17:57:29.126651] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:02.159 [2024-11-20 17:57:29.126666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:02.159 [2024-11-20 17:57:29.126676] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:23:02.159 [2024-11-20 17:57:29.126686] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.159 [2024-11-20 17:57:29.126703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:02.159 [2024-11-20 17:57:29.126712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:02.159 [2024-11-20 17:57:29.126721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:02.159 [2024-11-20 17:57:29.126731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:02.159 [2024-11-20 17:57:29.126740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:02.159 [2024-11-20 17:57:29.126749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.159 [2024-11-20 17:57:29.126759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:02.159 [2024-11-20 17:57:29.126784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:02.159 [2024-11-20 17:57:29.126794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:02.159 [2024-11-20 17:57:29.126803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:02.159 [2024-11-20 17:57:29.126813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:02.159 [2024-11-20 17:57:29.126831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.159 [2024-11-20 17:57:29.126840] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:02.159 [2024-11-20 17:57:29.126849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:02.159 [2024-11-20 17:57:29.126858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.159 [2024-11-20 17:57:29.126869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:02.159 [2024-11-20 17:57:29.126878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:02.159 [2024-11-20 17:57:29.126888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.159 [2024-11-20 17:57:29.126897] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:02.159 [2024-11-20 17:57:29.126907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:02.159 [2024-11-20 17:57:29.126916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.159 [2024-11-20 17:57:29.126925] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:02.159 [2024-11-20 17:57:29.126934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:02.159 [2024-11-20 17:57:29.126943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.159 [2024-11-20 17:57:29.126952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:02.159 [2024-11-20 17:57:29.126961] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:02.159 [2024-11-20 17:57:29.126970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:02.159 [2024-11-20 17:57:29.126978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:02.159 [2024-11-20 17:57:29.126987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:02.159 [2024-11-20 17:57:29.126996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.159 [2024-11-20 17:57:29.127005] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:23:02.159 [2024-11-20 17:57:29.127014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:02.159 [2024-11-20 17:57:29.127023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:02.159 [2024-11-20 17:57:29.127032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:02.159 [2024-11-20 17:57:29.127041] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:02.159 [2024-11-20 17:57:29.127049] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.159 [2024-11-20 17:57:29.127059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:02.159 [2024-11-20 17:57:29.127068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:02.159 [2024-11-20 17:57:29.127078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.159 [2024-11-20 17:57:29.127087] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:02.159 [2024-11-20 17:57:29.127097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:02.159 [2024-11-20 17:57:29.127107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:02.159 [2024-11-20 17:57:29.127116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:02.159 [2024-11-20 17:57:29.127126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:02.159 [2024-11-20 17:57:29.127135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:02.159 [2024-11-20 17:57:29.127144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:02.159 [2024-11-20 17:57:29.127153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:02.159 [2024-11-20 17:57:29.127162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:02.159 [2024-11-20 17:57:29.127171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:02.159 [2024-11-20 17:57:29.127181] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:02.159 [2024-11-20 17:57:29.127193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.159 [2024-11-20 17:57:29.127204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:02.159 [2024-11-20 17:57:29.127215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:02.159 [2024-11-20 17:57:29.127225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:02.159 [2024-11-20 17:57:29.127235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:02.159 [2024-11-20 17:57:29.127245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:02.159 [2024-11-20 17:57:29.127255] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:02.159 [2024-11-20 17:57:29.127266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:02.159 [2024-11-20 17:57:29.127276] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:02.159 [2024-11-20 17:57:29.127286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:02.159 [2024-11-20 17:57:29.127296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:02.159 [2024-11-20 17:57:29.127306] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:02.159 [2024-11-20 17:57:29.127317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:02.159 [2024-11-20 17:57:29.127327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:02.159 [2024-11-20 17:57:29.127337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:02.159 [2024-11-20 17:57:29.127347] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:02.159 [2024-11-20 17:57:29.127362] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:02.159 [2024-11-20 17:57:29.127372] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:02.159 [2024-11-20 17:57:29.127383] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:02.159 [2024-11-20 17:57:29.127393] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:02.159 [2024-11-20 17:57:29.127404] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:02.159 [2024-11-20 17:57:29.127415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.159 [2024-11-20 17:57:29.127426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:02.159 [2024-11-20 17:57:29.127436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:23:02.159 [2024-11-20 17:57:29.127446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.159 [2024-11-20 17:57:29.164020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.159 [2024-11-20 17:57:29.164064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:02.159 [2024-11-20 17:57:29.164078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.587 ms 00:23:02.159 [2024-11-20 17:57:29.164089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.160 [2024-11-20 17:57:29.164174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.160 [2024-11-20 17:57:29.164185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:02.160 [2024-11-20 17:57:29.164198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.052 ms 00:23:02.160 [2024-11-20 17:57:29.164208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.160 [2024-11-20 17:57:29.220418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.160 [2024-11-20 17:57:29.220461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:02.160 [2024-11-20 17:57:29.220474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.237 ms 00:23:02.160 [2024-11-20 17:57:29.220485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.160 [2024-11-20 17:57:29.220528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.160 [2024-11-20 17:57:29.220539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:02.160 [2024-11-20 17:57:29.220553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:02.160 [2024-11-20 17:57:29.220564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.160 [2024-11-20 17:57:29.221053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.160 [2024-11-20 17:57:29.221075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:02.160 [2024-11-20 17:57:29.221086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:23:02.160 [2024-11-20 17:57:29.221096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.160 [2024-11-20 17:57:29.221214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.160 [2024-11-20 17:57:29.221228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:02.160 [2024-11-20 17:57:29.221238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:23:02.160 [2024-11-20 17:57:29.221254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.160 [2024-11-20 17:57:29.239105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.160 [2024-11-20 17:57:29.239144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:02.160 [2024-11-20 17:57:29.239162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.859 ms 00:23:02.160 [2024-11-20 17:57:29.239172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.160 [2024-11-20 17:57:29.257997] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:02.160 [2024-11-20 17:57:29.258041] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:02.160 [2024-11-20 17:57:29.258056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.160 [2024-11-20 17:57:29.258067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:02.160 [2024-11-20 17:57:29.258078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.803 ms 00:23:02.160 [2024-11-20 17:57:29.258088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.160 [2024-11-20 17:57:29.288986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.160 [2024-11-20 17:57:29.289048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:02.160 [2024-11-20 17:57:29.289062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.904 ms 00:23:02.160 [2024-11-20 17:57:29.289073] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.160 [2024-11-20 17:57:29.307579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.160 [2024-11-20 17:57:29.307627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:02.160 [2024-11-20 17:57:29.307640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.486 ms 00:23:02.160 [2024-11-20 17:57:29.307650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.160 [2024-11-20 17:57:29.325202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.160 [2024-11-20 17:57:29.325243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:02.160 [2024-11-20 17:57:29.325256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.542 ms 00:23:02.160 [2024-11-20 17:57:29.325266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.160 [2024-11-20 17:57:29.326088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.160 [2024-11-20 17:57:29.326120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:02.160 [2024-11-20 17:57:29.326133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:23:02.160 [2024-11-20 17:57:29.326143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.419 [2024-11-20 17:57:29.416471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.419 [2024-11-20 17:57:29.416542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:02.419 [2024-11-20 17:57:29.416560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.446 ms 00:23:02.419 [2024-11-20 17:57:29.416579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.419 [2024-11-20 17:57:29.428576] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:02.419 [2024-11-20 17:57:29.431805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.419 [2024-11-20 17:57:29.431837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:02.419 [2024-11-20 17:57:29.431852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.177 ms 00:23:02.419 [2024-11-20 17:57:29.431862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.419 [2024-11-20 17:57:29.431966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.419 [2024-11-20 17:57:29.431980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:02.419 [2024-11-20 17:57:29.431991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:02.419 [2024-11-20 17:57:29.432000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.419 [2024-11-20 17:57:29.432096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.419 [2024-11-20 17:57:29.432109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:02.419 [2024-11-20 17:57:29.432120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:02.419 [2024-11-20 17:57:29.432129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.419 [2024-11-20 17:57:29.432153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.419 [2024-11-20 17:57:29.432164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:23:02.419 [2024-11-20 17:57:29.432174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:02.419 [2024-11-20 17:57:29.432183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.419 [2024-11-20 17:57:29.432215] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:02.419 [2024-11-20 17:57:29.432228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.419 [2024-11-20 17:57:29.432241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:02.419 [2024-11-20 17:57:29.432251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:02.419 [2024-11-20 17:57:29.432261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.419 [2024-11-20 17:57:29.470495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.419 [2024-11-20 17:57:29.470550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:02.419 [2024-11-20 17:57:29.470565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.273 ms 00:23:02.419 [2024-11-20 17:57:29.470576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.419 [2024-11-20 17:57:29.470667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:02.419 [2024-11-20 17:57:29.470680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:02.419 [2024-11-20 17:57:29.470691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:02.419 [2024-11-20 17:57:29.470701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:02.419 [2024-11-20 17:57:29.471735] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 379.250 ms, result 0 00:23:03.357  [2024-11-20T17:58:12.416Z] Copying: 1024/1024 [MB] (average 23 MBps)[2024-11-20 17:58:12.201430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.240 [2024-11-20 17:58:12.201496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:45.240 [2024-11-20 17:58:12.201525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:45.240 [2024-11-20 17:58:12.201536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.240 [2024-11-20 17:58:12.201574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:45.240 [2024-11-20 17:58:12.206378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.240 [2024-11-20 17:58:12.206417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:45.240 [2024-11-20 17:58:12.206431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.790 ms 00:23:45.240 [2024-11-20 17:58:12.206450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.240 [2024-11-20 17:58:12.208427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.240 [2024-11-20 17:58:12.208468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:45.240 [2024-11-20 17:58:12.208481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.951 ms 00:23:45.240 [2024-11-20 17:58:12.208492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.240 [2024-11-20 17:58:12.225837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.240 [2024-11-20 17:58:12.225878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:45.240 [2024-11-20 17:58:12.225891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.355 ms 00:23:45.240 [2024-11-20 17:58:12.225902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.240 [2024-11-20 17:58:12.230866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.240 [2024-11-20 17:58:12.230897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:45.240 [2024-11-20 17:58:12.230910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.929 ms 00:23:45.240 [2024-11-20 17:58:12.230921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.240 [2024-11-20 17:58:12.268925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.240 [2024-11-20 17:58:12.269122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0]
name: Persist NV cache metadata 00:23:45.240 [2024-11-20 17:58:12.269143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.003 ms 00:23:45.240 [2024-11-20 17:58:12.269154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.240 [2024-11-20 17:58:12.291375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.240 [2024-11-20 17:58:12.291413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:45.240 [2024-11-20 17:58:12.291428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.138 ms 00:23:45.240 [2024-11-20 17:58:12.291439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.240 [2024-11-20 17:58:12.291581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.241 [2024-11-20 17:58:12.291598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:45.241 [2024-11-20 17:58:12.291616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:23:45.241 [2024-11-20 17:58:12.291627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.241 [2024-11-20 17:58:12.328359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.241 [2024-11-20 17:58:12.328395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:45.241 [2024-11-20 17:58:12.328409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.775 ms 00:23:45.241 [2024-11-20 17:58:12.328419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.241 [2024-11-20 17:58:12.363804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.241 [2024-11-20 17:58:12.363840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:45.241 [2024-11-20 17:58:12.363868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.404 ms 00:23:45.241 [2024-11-20 17:58:12.363878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.241 [2024-11-20 17:58:12.398813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.241 [2024-11-20 17:58:12.398976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:45.241 [2024-11-20 17:58:12.398997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.953 ms 00:23:45.241 [2024-11-20 17:58:12.399007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.501 [2024-11-20 17:58:12.434077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.501 [2024-11-20 17:58:12.434231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:45.501 [2024-11-20 17:58:12.434251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.993 ms 00:23:45.501 [2024-11-20 17:58:12.434263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.501 [2024-11-20 17:58:12.434297] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:45.501 [2024-11-20 17:58:12.434314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 
17:58:12.434350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:23:45.501 [2024-11-20 17:58:12.434618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:45.501 [2024-11-20 17:58:12.434956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.434967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.434977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.434988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.434998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:45.502 [2024-11-20 17:58:12.435441] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:45.502 [2024-11-20 17:58:12.435461] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8837d17e-478f-4076-a8da-ed3abf6761e2 00:23:45.502 [2024-11-20 17:58:12.435476] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 0 00:23:45.502 [2024-11-20 17:58:12.435486] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:45.502 [2024-11-20 17:58:12.435495] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:45.502 [2024-11-20 17:58:12.435506] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:45.502 [2024-11-20 17:58:12.435516] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:45.502 [2024-11-20 17:58:12.435527] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:45.502 [2024-11-20 17:58:12.435537] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:45.502 [2024-11-20 17:58:12.435556] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:45.502 [2024-11-20 17:58:12.435566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:45.502 [2024-11-20 17:58:12.435577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.502 [2024-11-20 17:58:12.435587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:45.502 [2024-11-20 17:58:12.435598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.282 ms 00:23:45.502 [2024-11-20 17:58:12.435608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.502 [2024-11-20 17:58:12.456317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.502 [2024-11-20 17:58:12.456351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:45.502 [2024-11-20 17:58:12.456363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.690 ms 00:23:45.502 [2024-11-20 17:58:12.456374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.502 [2024-11-20 17:58:12.457017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:45.502 [2024-11-20 17:58:12.457038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:45.502 [2024-11-20 17:58:12.457049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:23:45.502 [2024-11-20 17:58:12.457060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.502 [2024-11-20 17:58:12.511163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.502 [2024-11-20 17:58:12.511197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:45.502 [2024-11-20 17:58:12.511211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.502 [2024-11-20 17:58:12.511222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.502 [2024-11-20 17:58:12.511290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.502 [2024-11-20 17:58:12.511302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:45.502 [2024-11-20 17:58:12.511312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.502 [2024-11-20 17:58:12.511323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.502 [2024-11-20 17:58:12.511418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.502 [2024-11-20 17:58:12.511432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:45.502 [2024-11-20 17:58:12.511443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.502 [2024-11-20 17:58:12.511454] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.502 [2024-11-20 17:58:12.511472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.502 [2024-11-20 17:58:12.511483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:45.502 [2024-11-20 17:58:12.511493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.502 [2024-11-20 17:58:12.511503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.502 [2024-11-20 17:58:12.645215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.502 [2024-11-20 17:58:12.645284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:45.502 [2024-11-20 17:58:12.645303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.502 [2024-11-20 17:58:12.645314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.761 [2024-11-20 17:58:12.751209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.762 [2024-11-20 17:58:12.751440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:45.762 [2024-11-20 17:58:12.751467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.762 [2024-11-20 17:58:12.751479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.762 [2024-11-20 17:58:12.751634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.762 [2024-11-20 17:58:12.751649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:45.762 [2024-11-20 17:58:12.751661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.762 [2024-11-20 17:58:12.751671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.762 [2024-11-20 17:58:12.751720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.762 [2024-11-20 17:58:12.751738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:45.762 [2024-11-20 17:58:12.751749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.762 [2024-11-20 17:58:12.751759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.762 [2024-11-20 17:58:12.751908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.762 [2024-11-20 17:58:12.751929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:45.762 [2024-11-20 17:58:12.751941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.762 [2024-11-20 17:58:12.751951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.762 [2024-11-20 17:58:12.752015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.762 [2024-11-20 17:58:12.752030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:45.762 [2024-11-20 17:58:12.752041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.762 [2024-11-20 17:58:12.752051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.762 [2024-11-20 17:58:12.752100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.762 [2024-11-20 17:58:12.752118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:45.762 [2024-11-20 17:58:12.752128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:23:45.762 [2024-11-20 17:58:12.752140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.762 [2024-11-20 17:58:12.752206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:45.762 [2024-11-20 17:58:12.752218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:45.762 [2024-11-20 17:58:12.752229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:45.762 [2024-11-20 17:58:12.752240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:45.762 [2024-11-20 17:58:12.752420] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 551.832 ms, result 0 00:23:47.140 00:23:47.140 00:23:47.140 17:58:14 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:23:47.140 [2024-11-20 17:58:14.119992] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:23:47.140 [2024-11-20 17:58:14.120115] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79742 ] 00:23:47.140 [2024-11-20 17:58:14.300468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:47.399 [2024-11-20 17:58:14.440061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:47.969 [2024-11-20 17:58:14.859614] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:47.969 [2024-11-20 17:58:14.859923] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:47.969 [2024-11-20 17:58:15.025459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.969 [2024-11-20 17:58:15.025517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:47.969 [2024-11-20 17:58:15.025539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:47.969 [2024-11-20 17:58:15.025557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.969 [2024-11-20 17:58:15.025623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.969 [2024-11-20 17:58:15.025636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:47.969 [2024-11-20 17:58:15.025651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:47.969 [2024-11-20 17:58:15.025662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.969 [2024-11-20 17:58:15.025683] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:47.969 [2024-11-20 17:58:15.026614] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:47.969 [2024-11-20 17:58:15.026645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.969 [2024-11-20 17:58:15.026656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:47.969 [2024-11-20 17:58:15.026668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.967 ms 00:23:47.969 [2024-11-20 17:58:15.026679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:23:47.969 [2024-11-20 17:58:15.029068] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:47.969 [2024-11-20 17:58:15.049672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.969 [2024-11-20 17:58:15.049732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:47.969 [2024-11-20 17:58:15.049749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.639 ms 00:23:47.969 [2024-11-20 17:58:15.049759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.969 [2024-11-20 17:58:15.049848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.969 [2024-11-20 17:58:15.049862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:47.969 [2024-11-20 17:58:15.049874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:23:47.969 [2024-11-20 17:58:15.049886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.969 [2024-11-20 17:58:15.061797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.969 [2024-11-20 17:58:15.061827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:47.969 [2024-11-20 17:58:15.061842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.860 ms 00:23:47.969 [2024-11-20 17:58:15.061858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.969 [2024-11-20 17:58:15.061950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.969 [2024-11-20 17:58:15.061964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:47.969 [2024-11-20 17:58:15.061976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:23:47.969 [2024-11-20 17:58:15.061987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.969 [2024-11-20 17:58:15.062044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.969 [2024-11-20 17:58:15.062057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:47.969 [2024-11-20 17:58:15.062068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:47.969 [2024-11-20 17:58:15.062078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.969 [2024-11-20 17:58:15.062111] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:47.969 [2024-11-20 17:58:15.067865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.969 [2024-11-20 17:58:15.068084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:47.969 [2024-11-20 17:58:15.068107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.776 ms 00:23:47.969 [2024-11-20 17:58:15.068124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.969 [2024-11-20 17:58:15.068161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.969 [2024-11-20 17:58:15.068173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:47.969 [2024-11-20 17:58:15.068185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:47.969 [2024-11-20 17:58:15.068195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.969 [2024-11-20 17:58:15.068236] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] 
FTL layout setup mode 0 00:23:47.969 [2024-11-20 17:58:15.068264] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:47.969 [2024-11-20 17:58:15.068301] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:47.970 [2024-11-20 17:58:15.068325] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:47.970 [2024-11-20 17:58:15.068419] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:47.970 [2024-11-20 17:58:15.068434] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:47.970 [2024-11-20 17:58:15.068447] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:47.970 [2024-11-20 17:58:15.068461] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:47.970 [2024-11-20 17:58:15.068474] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:47.970 [2024-11-20 17:58:15.068486] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:47.970 [2024-11-20 17:58:15.068497] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:47.970 [2024-11-20 17:58:15.068509] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:47.970 [2024-11-20 17:58:15.068523] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:47.970 [2024-11-20 17:58:15.068535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.970 [2024-11-20 17:58:15.068547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:47.970 [2024-11-20 17:58:15.068558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:23:47.970 [2024-11-20 17:58:15.068568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.970 [2024-11-20 17:58:15.068640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.970 [2024-11-20 17:58:15.068651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:47.970 [2024-11-20 17:58:15.068663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:47.970 [2024-11-20 17:58:15.068673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.970 [2024-11-20 17:58:15.068796] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:47.970 [2024-11-20 17:58:15.068813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:47.970 [2024-11-20 17:58:15.068825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:47.970 [2024-11-20 17:58:15.068835] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.970 [2024-11-20 17:58:15.068847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:47.970 [2024-11-20 17:58:15.068857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:47.970 [2024-11-20 17:58:15.068867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:47.970 [2024-11-20 17:58:15.068877] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:47.970 [2024-11-20 17:58:15.068888] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:47.970 [2024-11-20 17:58:15.068898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:47.970 [2024-11-20 17:58:15.068910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:47.970 [2024-11-20 17:58:15.068920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:47.970 [2024-11-20 17:58:15.068930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:47.970 [2024-11-20 17:58:15.068939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:47.970 [2024-11-20 17:58:15.068950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:47.970 [2024-11-20 17:58:15.068971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.970 [2024-11-20 17:58:15.068981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:47.970 [2024-11-20 17:58:15.068991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:47.970 [2024-11-20 17:58:15.069000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.970 [2024-11-20 17:58:15.069010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:47.970 [2024-11-20 17:58:15.069020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:47.970 [2024-11-20 17:58:15.069030] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:47.970 [2024-11-20 17:58:15.069040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:47.970 [2024-11-20 17:58:15.069051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:47.970 [2024-11-20 17:58:15.069060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:47.970 [2024-11-20 17:58:15.069069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:47.970 [2024-11-20 17:58:15.069079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:47.970 [2024-11-20 17:58:15.069088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:47.970 [2024-11-20 17:58:15.069098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:47.970 [2024-11-20 17:58:15.069107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:47.970 [2024-11-20 17:58:15.069116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:47.970 [2024-11-20 17:58:15.069125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:47.970 [2024-11-20 17:58:15.069134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:47.970 [2024-11-20 17:58:15.069143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:47.970 [2024-11-20 17:58:15.069152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:47.970 [2024-11-20 17:58:15.069161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:47.970 [2024-11-20 17:58:15.069170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:47.970 [2024-11-20 17:58:15.069180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:47.970 [2024-11-20 17:58:15.069189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:47.970 [2024-11-20 17:58:15.069198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 
00:23:47.970 [2024-11-20 17:58:15.069207] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:47.970 [2024-11-20 17:58:15.069216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:47.970 [2024-11-20 17:58:15.069227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.970 [2024-11-20 17:58:15.069237] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:47.970 [2024-11-20 17:58:15.069248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:47.970 [2024-11-20 17:58:15.069258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:47.970 [2024-11-20 17:58:15.069268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:47.970 [2024-11-20 17:58:15.069278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:47.970 [2024-11-20 17:58:15.069288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:47.970 [2024-11-20 17:58:15.069298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:47.970 [2024-11-20 17:58:15.069308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:47.970 [2024-11-20 17:58:15.069317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:47.970 [2024-11-20 17:58:15.069327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:47.970 [2024-11-20 17:58:15.069337] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:47.970 [2024-11-20 17:58:15.069350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:47.970 [2024-11-20 17:58:15.069361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:47.970 [2024-11-20 17:58:15.069372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:47.970 [2024-11-20 17:58:15.069383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:47.970 [2024-11-20 17:58:15.069396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:47.970 [2024-11-20 17:58:15.069407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:47.970 [2024-11-20 17:58:15.069417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:47.970 [2024-11-20 17:58:15.069427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:47.970 [2024-11-20 17:58:15.069438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:47.970 [2024-11-20 17:58:15.069448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:47.970 [2024-11-20 17:58:15.069459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:47.970 [2024-11-20 
17:58:15.069469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:47.970 [2024-11-20 17:58:15.069480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:47.970 [2024-11-20 17:58:15.069491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:47.970 [2024-11-20 17:58:15.069502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:47.970 [2024-11-20 17:58:15.069512] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:47.970 [2024-11-20 17:58:15.069528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:47.970 [2024-11-20 17:58:15.069540] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:47.970 [2024-11-20 17:58:15.069560] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:47.970 [2024-11-20 17:58:15.069571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:47.970 [2024-11-20 17:58:15.069584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:47.970 [2024-11-20 17:58:15.069596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.970 [2024-11-20 17:58:15.069607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:47.970 [2024-11-20 17:58:15.069618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:23:47.970 [2024-11-20 17:58:15.069629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.970 [2024-11-20 17:58:15.114499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.970 [2024-11-20 17:58:15.114545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:47.971 [2024-11-20 17:58:15.114562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.888 ms 00:23:47.971 [2024-11-20 17:58:15.114574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:47.971 [2024-11-20 17:58:15.114682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:47.971 [2024-11-20 17:58:15.114695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:47.971 [2024-11-20 17:58:15.114707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:47.971 [2024-11-20 17:58:15.114718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.230 [2024-11-20 17:58:15.174786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.230 [2024-11-20 17:58:15.174833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:48.230 [2024-11-20 17:58:15.174849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.072 ms 00:23:48.230 [2024-11-20 17:58:15.174861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.230 [2024-11-20 
17:58:15.174921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.230 [2024-11-20 17:58:15.174934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:48.230 [2024-11-20 17:58:15.174951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:48.230 [2024-11-20 17:58:15.174962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.230 [2024-11-20 17:58:15.175810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.230 [2024-11-20 17:58:15.175832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:48.230 [2024-11-20 17:58:15.175845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.751 ms 00:23:48.230 [2024-11-20 17:58:15.175856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.230 [2024-11-20 17:58:15.175996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.230 [2024-11-20 17:58:15.176011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:48.230 [2024-11-20 17:58:15.176023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:23:48.230 [2024-11-20 17:58:15.176041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.230 [2024-11-20 17:58:15.197185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.230 [2024-11-20 17:58:15.197226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:48.230 [2024-11-20 17:58:15.197245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.154 ms 00:23:48.230 [2024-11-20 17:58:15.197257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.230 [2024-11-20 17:58:15.217393] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:48.230 [2024-11-20 17:58:15.217608] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:48.230 [2024-11-20 17:58:15.217632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.230 [2024-11-20 17:58:15.217645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:48.230 [2024-11-20 17:58:15.217658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.270 ms 00:23:48.230 [2024-11-20 17:58:15.217669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.230 [2024-11-20 17:58:15.248945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.230 [2024-11-20 17:58:15.248994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:48.230 [2024-11-20 17:58:15.249009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.257 ms 00:23:48.230 [2024-11-20 17:58:15.249021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.230 [2024-11-20 17:58:15.267658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.231 [2024-11-20 17:58:15.267696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:48.231 [2024-11-20 17:58:15.267710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.344 ms 00:23:48.231 [2024-11-20 17:58:15.267720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.231 [2024-11-20 17:58:15.285839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:48.231 [2024-11-20 17:58:15.285874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:48.231 [2024-11-20 17:58:15.285888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.107 ms 00:23:48.231 [2024-11-20 17:58:15.285898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.231 [2024-11-20 17:58:15.286721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.231 [2024-11-20 17:58:15.286747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:48.231 [2024-11-20 17:58:15.286760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.700 ms 00:23:48.231 [2024-11-20 17:58:15.286792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.231 [2024-11-20 17:58:15.383410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.231 [2024-11-20 17:58:15.383681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:48.231 [2024-11-20 17:58:15.383716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.748 ms 00:23:48.231 [2024-11-20 17:58:15.383729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.231 [2024-11-20 17:58:15.394619] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:48.231 [2024-11-20 17:58:15.398654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.231 [2024-11-20 17:58:15.398686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:48.231 [2024-11-20 17:58:15.398702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.870 ms 00:23:48.231 [2024-11-20 17:58:15.398714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.231 [2024-11-20 17:58:15.398820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.231 [2024-11-20 17:58:15.398835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:48.231 [2024-11-20 17:58:15.398848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:48.231 [2024-11-20 17:58:15.398865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.231 [2024-11-20 17:58:15.398979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.231 [2024-11-20 17:58:15.398993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:48.231 [2024-11-20 17:58:15.399005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:23:48.231 [2024-11-20 17:58:15.399016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.231 [2024-11-20 17:58:15.399044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.231 [2024-11-20 17:58:15.399056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:48.231 [2024-11-20 17:58:15.399067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:48.231 [2024-11-20 17:58:15.399077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:48.231 [2024-11-20 17:58:15.399122] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:48.231 [2024-11-20 17:58:15.399135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:48.231 [2024-11-20 17:58:15.399147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Self test on startup
00:23:48.231 [2024-11-20 17:58:15.399158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:23:48.231 [2024-11-20 17:58:15.399169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.490 [2024-11-20 17:58:15.436438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.490 [2024-11-20 17:58:15.436595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:23:48.490 [2024-11-20 17:58:15.436710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.308 ms
00:23:48.490 [2024-11-20 17:58:15.436759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.490 [2024-11-20 17:58:15.436884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:48.490 [2024-11-20 17:58:15.436983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:23:48.490 [2024-11-20 17:58:15.437020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms
00:23:48.490 [2024-11-20 17:58:15.437051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:48.490 [2024-11-20 17:58:15.438640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 413.341 ms, result 0
00:23:49.868  [2024-11-20T17:58:17.981Z] Copying: 24/1024 [MB] (24 MBps)
[... 38 intermediate Copying progress redraws (rate held at 24-27 MBps throughout) collapsed ...]
[2024-11-20T17:58:56.456Z] Copying: 1024/1024 [MB] (average 25 MBps)
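Every FTL management step in the trace above is logged by mngt/ftl_mngt.c as a quadruple of NOTICE lines (Action / name / duration / status), so per-step timings for the 'FTL startup' process that just finished (413.341 ms total) can be pulled straight out of a saved console log. A minimal sketch, assuming the raw log has been saved one entry per line as build.log (a hypothetical filename, not part of this job):

```bash
#!/usr/bin/env bash
# Sketch: rank FTL management steps by duration from a saved console log.
# Assumes one log entry per line, as in the raw Jenkins console output.
awk '
  /428:trace_step/ { sub(/.*name: /, "");     name = $0 }      # remember the step name
  /430:trace_step/ { sub(/.*duration: /, ""); sub(/ ms.*/, "");
                     printf "%10.3f ms  %s\n", $0 + 0, name }  # pair it with its duration
' build.log | sort -rn | head   # ten slowest steps first
```

Against the startup trace above, this would surface Restore P2L checkpoints (96.748 ms) and Initialize NV cache (60.072 ms) as the slowest steps of this run.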
[2024-11-20 17:58:56.224813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.280 [2024-11-20 17:58:56.224890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:24:29.280 [2024-11-20 17:58:56.224907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:24:29.280 [2024-11-20 17:58:56.224919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:29.280 [2024-11-20 17:58:56.224943] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:24:29.280 [2024-11-20 17:58:56.229748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.280 [2024-11-20 17:58:56.229794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:24:29.280 [2024-11-20 17:58:56.229815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.794 ms
00:24:29.280 [2024-11-20 17:58:56.229826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:29.280 [2024-11-20 17:58:56.230033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.280 [2024-11-20 17:58:56.230046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:24:29.280 [2024-11-20 17:58:56.230056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms
00:24:29.280 [2024-11-20 17:58:56.230066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:29.280 [2024-11-20 17:58:56.232945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.280 [2024-11-20 17:58:56.232968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:24:29.280 [2024-11-20 17:58:56.232981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.867 ms
00:24:29.280 [2024-11-20 17:58:56.232991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:29.280 [2024-11-20 17:58:56.238656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.280 [2024-11-20 17:58:56.238825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:24:29.280 [2024-11-20 17:58:56.238849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.645 ms
00:24:29.280 [2024-11-20 17:58:56.238860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:29.280 [2024-11-20 17:58:56.275870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.280 [2024-11-20 17:58:56.275924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:24:29.280 [2024-11-20 17:58:56.275939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.989 ms
00:24:29.280 [2024-11-20 17:58:56.275966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:29.281 [2024-11-20 17:58:56.297319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.281 [2024-11-20 17:58:56.297471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:24:29.281 [2024-11-20 17:58:56.297493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.345 ms
00:24:29.281 [2024-11-20 17:58:56.297504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:29.281 [2024-11-20 17:58:56.297671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.281 [2024-11-20 17:58:56.297694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:24:29.281 [2024-11-20 17:58:56.297705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms
00:24:29.281 [2024-11-20 17:58:56.297715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:29.281 [2024-11-20 17:58:56.335138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.281 [2024-11-20 17:58:56.335190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:24:29.281 [2024-11-20 17:58:56.335204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.467 ms
00:24:29.281 [2024-11-20 17:58:56.335214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:29.281 [2024-11-20 17:58:56.371574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.281 [2024-11-20 17:58:56.371730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:24:29.281 [2024-11-20 17:58:56.371750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.380 ms
00:24:29.281 [2024-11-20 17:58:56.371761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:29.281 [2024-11-20 17:58:56.407084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.281 [2024-11-20 17:58:56.407121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:24:29.281 [2024-11-20 17:58:56.407134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.333 ms
00:24:29.281 [2024-11-20 17:58:56.407144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:29.281 [2024-11-20 17:58:56.442667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:29.281 [2024-11-20 17:58:56.442705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:24:29.281 [2024-11-20 17:58:56.442718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.504 ms
00:24:29.281 [2024-11-20 17:58:56.442728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:29.281 [2024-11-20 17:58:56.442763] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:24:29.281 [2024-11-20 17:58:56.442787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Band 2 through Band 99 collapsed: all 100 bands report the identical line 0 / 261120 wr_cnt: 0 state: free ...]
00:24:29.282 [2024-11-20 17:58:56.443845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:24:29.282 [2024-11-20 17:58:56.443864] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:24:29.282 [2024-11-20 17:58:56.443878] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8837d17e-478f-4076-a8da-ed3abf6761e2
00:24:29.282 [2024-11-20 17:58:56.443889] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:24:29.282 [2024-11-20 17:58:56.443899] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:24:29.282 [2024-11-20 17:58:56.443908] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:24:29.282 [2024-11-20 17:58:56.443919] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:24:29.282 [2024-11-20 17:58:56.443928] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:24:29.282 [2024-11-20 17:58:56.443938] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:24:29.282 [2024-11-20 17:58:56.443958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:24:29.282 [2024-11-20 17:58:56.443967] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:29.282 [2024-11-20 17:58:56.443977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:29.282 [2024-11-20 17:58:56.443987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.282 [2024-11-20 17:58:56.443997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:29.282 [2024-11-20 17:58:56.444007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.226 ms 00:24:29.282 [2024-11-20 17:58:56.444017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.546 [2024-11-20 17:58:56.464072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.546 [2024-11-20 17:58:56.464107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:29.546 [2024-11-20 17:58:56.464120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.033 ms 00:24:29.546 [2024-11-20 17:58:56.464146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.546 [2024-11-20 17:58:56.464698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:29.546 [2024-11-20 17:58:56.464713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:29.546 [2024-11-20 17:58:56.464724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:24:29.546 [2024-11-20 17:58:56.464740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.546 [2024-11-20 17:58:56.516933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.546 [2024-11-20 17:58:56.517081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:29.546 [2024-11-20 17:58:56.517103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.546 [2024-11-20 17:58:56.517113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.546 [2024-11-20 17:58:56.517175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.546 [2024-11-20 17:58:56.517186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:29.546 [2024-11-20 17:58:56.517197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.546 [2024-11-20 17:58:56.517213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.546 [2024-11-20 17:58:56.517285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.546 [2024-11-20 17:58:56.517298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:29.546 [2024-11-20 17:58:56.517309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.546 [2024-11-20 17:58:56.517318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.546 [2024-11-20 17:58:56.517335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.546 [2024-11-20 17:58:56.517346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:29.546 [2024-11-20 17:58:56.517356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.546 [2024-11-20 17:58:56.517366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.546 [2024-11-20 17:58:56.641377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.546 [2024-11-20 17:58:56.641591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV 
cache 00:24:29.546 [2024-11-20 17:58:56.641614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.546 [2024-11-20 17:58:56.641625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.806 [2024-11-20 17:58:56.742066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.806 [2024-11-20 17:58:56.742272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:29.806 [2024-11-20 17:58:56.742293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.806 [2024-11-20 17:58:56.742311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.806 [2024-11-20 17:58:56.742403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.806 [2024-11-20 17:58:56.742415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:29.806 [2024-11-20 17:58:56.742426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.806 [2024-11-20 17:58:56.742436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.806 [2024-11-20 17:58:56.742478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.806 [2024-11-20 17:58:56.742490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:29.806 [2024-11-20 17:58:56.742500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.806 [2024-11-20 17:58:56.742510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.806 [2024-11-20 17:58:56.742627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.806 [2024-11-20 17:58:56.742640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:29.806 [2024-11-20 17:58:56.742651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.806 [2024-11-20 17:58:56.742661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.806 [2024-11-20 17:58:56.742697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.806 [2024-11-20 17:58:56.742709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:29.806 [2024-11-20 17:58:56.742719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.806 [2024-11-20 17:58:56.742728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.806 [2024-11-20 17:58:56.742795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.806 [2024-11-20 17:58:56.742807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:29.806 [2024-11-20 17:58:56.742818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.806 [2024-11-20 17:58:56.742828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.807 [2024-11-20 17:58:56.742870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:29.807 [2024-11-20 17:58:56.742881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:29.807 [2024-11-20 17:58:56.742891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:29.807 [2024-11-20 17:58:56.742902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:29.807 [2024-11-20 17:58:56.743019] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 
519.017 ms, result 0
00:24:30.744 
00:24:30.744 
00:24:30.744 17:58:57 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:24:32.685 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
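At this point restore.sh is doing a plain verify-then-write-back: confirm the data previously read out of ftl0 still matches its recorded checksum, then push the test file back through the FTL bdev at an offset. A minimal standalone sketch of that step, using the paths and flags taken verbatim from the invocation below (only this wrapper script itself is hypothetical):

```bash
#!/usr/bin/env bash
# Sketch of restore.sh's verify/write-back step; paths and flags come
# straight from this job's log, only this wrapper file is hypothetical.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
TESTFILE=$SPDK/test/ftl/testfile

# 1) Verify the data read back from ftl0 against the stored md5.
md5sum -c "$TESTFILE.md5"

# 2) Write the file back through the FTL bdev, re-attaching ftl0 from the
#    saved JSON config and seeking 131072 I/O units into the output device.
"$SPDK/build/bin/spdk_dd" --if="$TESTFILE" --ob=ftl0 \
    --json="$SPDK/test/ftl/config/ftl.json" --seek=131072
```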
00:24:32.685 17:58:59 ftl.ftl_restore -- ftl/restore.sh@79 -- /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072
00:24:32.685 [2024-11-20 17:58:59.623929] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization...
00:24:32.685 [2024-11-20 17:58:59.624208] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80200 ]
00:24:32.685 [2024-11-20 17:58:59.804645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:32.944 [2024-11-20 17:58:59.920113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:24:33.202 [2024-11-20 17:59:00.282428] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:33.202 [2024-11-20 17:59:00.282739] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:24:33.461 [2024-11-20 17:59:00.442930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:33.461 [2024-11-20 17:59:00.442978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:24:33.461 [2024-11-20 17:59:00.442997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:24:33.461 [2024-11-20 17:59:00.443024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:33.461 [2024-11-20 17:59:00.443071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:33.461 [2024-11-20 17:59:00.443083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:33.461 [2024-11-20 17:59:00.443098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms
00:24:33.462 [2024-11-20 17:59:00.443107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:33.462 [2024-11-20 17:59:00.443128] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:24:33.462 [2024-11-20 17:59:00.444172] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:24:33.462 [2024-11-20 17:59:00.444194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:33.462 [2024-11-20 17:59:00.444205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:33.462 [2024-11-20 17:59:00.444216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms
00:24:33.462 [2024-11-20 17:59:00.444226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:33.462 [2024-11-20 17:59:00.445613] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:24:33.462 [2024-11-20 17:59:00.464414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:33.462 [2024-11-20 17:59:00.464453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:24:33.462 [2024-11-20 17:59:00.464467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.832 ms
00:24:33.462 [2024-11-20 17:59:00.464493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:33.462 [2024-11-20 17:59:00.464557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:33.462 [2024-11-20 17:59:00.464570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:24:33.462 [2024-11-20 17:59:00.464580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms
00:24:33.462 [2024-11-20 17:59:00.464590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:33.462 [2024-11-20 17:59:00.471309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:33.462 [2024-11-20 17:59:00.471338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:33.462 [2024-11-20 17:59:00.471349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.660 ms
00:24:33.462 [2024-11-20 17:59:00.471363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:33.462 [2024-11-20 17:59:00.471437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:33.462 [2024-11-20 17:59:00.471450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:33.462 [2024-11-20 17:59:00.471460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms
00:24:33.462 [2024-11-20 17:59:00.471470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:33.462 [2024-11-20 17:59:00.471509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:33.462 [2024-11-20 17:59:00.471521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:24:33.462 [2024-11-20 17:59:00.471532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:24:33.462 [2024-11-20 17:59:00.471541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:33.462 [2024-11-20 17:59:00.471567] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:24:33.462 [2024-11-20 17:59:00.476344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:33.462 [2024-11-20 17:59:00.476373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:33.462 [2024-11-20 17:59:00.476385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.793 ms
00:24:33.462 [2024-11-20 17:59:00.476414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:33.462 [2024-11-20 17:59:00.476443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:33.462 [2024-11-20 17:59:00.476454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:24:33.462 [2024-11-20 17:59:00.476465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:24:33.462 [2024-11-20 17:59:00.476474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:33.462 [2024-11-20 17:59:00.476526] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:24:33.462 [2024-11-20 17:59:00.476550] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:24:33.462 [2024-11-20 17:59:00.476585] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:24:33.462 [2024-11-20 17:59:00.476607] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:24:33.462 [2024-11-20 17:59:00.476696] upgrade/ftl_sb_v5.c: 
92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:33.462 [2024-11-20 17:59:00.476709] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:33.462 [2024-11-20 17:59:00.476722] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:33.462 [2024-11-20 17:59:00.476735] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:33.462 [2024-11-20 17:59:00.476747] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:33.462 [2024-11-20 17:59:00.476758] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:33.462 [2024-11-20 17:59:00.476768] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:33.462 [2024-11-20 17:59:00.476777] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:33.462 [2024-11-20 17:59:00.476809] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:33.462 [2024-11-20 17:59:00.476820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.462 [2024-11-20 17:59:00.476830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:33.462 [2024-11-20 17:59:00.476841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.297 ms 00:24:33.462 [2024-11-20 17:59:00.476851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.462 [2024-11-20 17:59:00.476922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.462 [2024-11-20 17:59:00.476933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:33.462 [2024-11-20 17:59:00.476943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:33.462 [2024-11-20 17:59:00.476953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.462 [2024-11-20 17:59:00.477052] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:33.462 [2024-11-20 17:59:00.477066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:33.462 [2024-11-20 17:59:00.477077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:33.462 [2024-11-20 17:59:00.477087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.462 [2024-11-20 17:59:00.477098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:33.462 [2024-11-20 17:59:00.477107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:33.462 [2024-11-20 17:59:00.477117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:33.462 [2024-11-20 17:59:00.477127] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:33.462 [2024-11-20 17:59:00.477137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:33.462 [2024-11-20 17:59:00.477146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:33.462 [2024-11-20 17:59:00.477156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:33.462 [2024-11-20 17:59:00.477165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:33.462 [2024-11-20 17:59:00.477174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:33.462 [2024-11-20 
17:59:00.477183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:33.462 [2024-11-20 17:59:00.477192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:33.462 [2024-11-20 17:59:00.477210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.462 [2024-11-20 17:59:00.477220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:33.462 [2024-11-20 17:59:00.477229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:33.462 [2024-11-20 17:59:00.477238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.462 [2024-11-20 17:59:00.477247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:33.462 [2024-11-20 17:59:00.477256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:33.462 [2024-11-20 17:59:00.477265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.462 [2024-11-20 17:59:00.477274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:33.462 [2024-11-20 17:59:00.477283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:33.462 [2024-11-20 17:59:00.477292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.462 [2024-11-20 17:59:00.477301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:33.462 [2024-11-20 17:59:00.477310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:33.462 [2024-11-20 17:59:00.477318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.462 [2024-11-20 17:59:00.477327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:33.462 [2024-11-20 17:59:00.477337] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:33.462 [2024-11-20 17:59:00.477346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:33.462 [2024-11-20 17:59:00.477355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:33.462 [2024-11-20 17:59:00.477364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:33.462 [2024-11-20 17:59:00.477372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:33.462 [2024-11-20 17:59:00.477381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:33.462 [2024-11-20 17:59:00.477390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:33.462 [2024-11-20 17:59:00.477399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:33.462 [2024-11-20 17:59:00.477408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:33.462 [2024-11-20 17:59:00.477417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:33.462 [2024-11-20 17:59:00.477425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.462 [2024-11-20 17:59:00.477434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:33.462 [2024-11-20 17:59:00.477443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:33.462 [2024-11-20 17:59:00.477452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.462 [2024-11-20 17:59:00.477461] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:33.462 [2024-11-20 17:59:00.477471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region sb_mirror 00:24:33.462 [2024-11-20 17:59:00.477480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:33.462 [2024-11-20 17:59:00.477489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:33.462 [2024-11-20 17:59:00.477499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:33.463 [2024-11-20 17:59:00.477508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:33.463 [2024-11-20 17:59:00.477526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:33.463 [2024-11-20 17:59:00.477536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:33.463 [2024-11-20 17:59:00.477545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:33.463 [2024-11-20 17:59:00.477554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:33.463 [2024-11-20 17:59:00.477565] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:33.463 [2024-11-20 17:59:00.477577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:33.463 [2024-11-20 17:59:00.477588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:33.463 [2024-11-20 17:59:00.477598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:33.463 [2024-11-20 17:59:00.477608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:33.463 [2024-11-20 17:59:00.477618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:33.463 [2024-11-20 17:59:00.477627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:33.463 [2024-11-20 17:59:00.477637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:33.463 [2024-11-20 17:59:00.477648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:33.463 [2024-11-20 17:59:00.477658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:33.463 [2024-11-20 17:59:00.477668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:33.463 [2024-11-20 17:59:00.477678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:33.463 [2024-11-20 17:59:00.477687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:33.463 [2024-11-20 17:59:00.477697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:33.463 [2024-11-20 17:59:00.477707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:33.463 [2024-11-20 17:59:00.477717] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:33.463 [2024-11-20 17:59:00.477727] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:33.463 [2024-11-20 17:59:00.477742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:33.463 [2024-11-20 17:59:00.477753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:33.463 [2024-11-20 17:59:00.477763] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:33.463 [2024-11-20 17:59:00.477785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:33.463 [2024-11-20 17:59:00.477796] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:33.463 [2024-11-20 17:59:00.477807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.463 [2024-11-20 17:59:00.477818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:33.463 [2024-11-20 17:59:00.477828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:24:33.463 [2024-11-20 17:59:00.477837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.463 [2024-11-20 17:59:00.518088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.463 [2024-11-20 17:59:00.518291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:33.463 [2024-11-20 17:59:00.518314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.272 ms 00:24:33.463 [2024-11-20 17:59:00.518326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.463 [2024-11-20 17:59:00.518416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.463 [2024-11-20 17:59:00.518429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:33.463 [2024-11-20 17:59:00.518440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:24:33.463 [2024-11-20 17:59:00.518451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.463 [2024-11-20 17:59:00.577097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.463 [2024-11-20 17:59:00.577127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:33.463 [2024-11-20 17:59:00.577140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.682 ms 00:24:33.463 [2024-11-20 17:59:00.577151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.463 [2024-11-20 17:59:00.577188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.463 [2024-11-20 17:59:00.577199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:33.463 [2024-11-20 17:59:00.577214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:33.463 [2024-11-20 17:59:00.577224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.463 [2024-11-20 17:59:00.577708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.463 [2024-11-20 
17:59:00.577722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:33.463 [2024-11-20 17:59:00.577733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:24:33.463 [2024-11-20 17:59:00.577743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.463 [2024-11-20 17:59:00.577880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.463 [2024-11-20 17:59:00.577912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:33.463 [2024-11-20 17:59:00.577923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:24:33.463 [2024-11-20 17:59:00.577939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.463 [2024-11-20 17:59:00.596899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.463 [2024-11-20 17:59:00.597052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:33.463 [2024-11-20 17:59:00.597079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.971 ms 00:24:33.463 [2024-11-20 17:59:00.597090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.463 [2024-11-20 17:59:00.617109] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:33.463 [2024-11-20 17:59:00.617144] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:33.463 [2024-11-20 17:59:00.617159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.463 [2024-11-20 17:59:00.617171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:33.463 [2024-11-20 17:59:00.617183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.997 ms 00:24:33.463 [2024-11-20 17:59:00.617192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.723 [2024-11-20 17:59:00.647254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.723 [2024-11-20 17:59:00.647286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:33.723 [2024-11-20 17:59:00.647299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.071 ms 00:24:33.723 [2024-11-20 17:59:00.647310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.723 [2024-11-20 17:59:00.665103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.723 [2024-11-20 17:59:00.665135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:33.723 [2024-11-20 17:59:00.665147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.781 ms 00:24:33.723 [2024-11-20 17:59:00.665172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.723 [2024-11-20 17:59:00.683150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.723 [2024-11-20 17:59:00.683183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:33.723 [2024-11-20 17:59:00.683196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.968 ms 00:24:33.723 [2024-11-20 17:59:00.683205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.723 [2024-11-20 17:59:00.683954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.723 [2024-11-20 17:59:00.683977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize P2L checkpointing 00:24:33.723 [2024-11-20 17:59:00.683989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.644 ms 00:24:33.723 [2024-11-20 17:59:00.684002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.723 [2024-11-20 17:59:00.769913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.723 [2024-11-20 17:59:00.769974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:33.723 [2024-11-20 17:59:00.769995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.029 ms 00:24:33.723 [2024-11-20 17:59:00.770006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.723 [2024-11-20 17:59:00.780923] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:33.723 [2024-11-20 17:59:00.783845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.723 [2024-11-20 17:59:00.783875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:33.723 [2024-11-20 17:59:00.783889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.812 ms 00:24:33.723 [2024-11-20 17:59:00.783900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.723 [2024-11-20 17:59:00.783988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.723 [2024-11-20 17:59:00.784001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:33.723 [2024-11-20 17:59:00.784013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:33.723 [2024-11-20 17:59:00.784026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.723 [2024-11-20 17:59:00.784114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.723 [2024-11-20 17:59:00.784126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:33.723 [2024-11-20 17:59:00.784137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:24:33.723 [2024-11-20 17:59:00.784146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.723 [2024-11-20 17:59:00.784171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.723 [2024-11-20 17:59:00.784182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:33.723 [2024-11-20 17:59:00.784191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:33.723 [2024-11-20 17:59:00.784201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.723 [2024-11-20 17:59:00.784236] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:33.723 [2024-11-20 17:59:00.784248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.723 [2024-11-20 17:59:00.784258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:33.723 [2024-11-20 17:59:00.784268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:33.723 [2024-11-20 17:59:00.784277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.723 [2024-11-20 17:59:00.821310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.723 [2024-11-20 17:59:00.821348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:33.723 [2024-11-20 17:59:00.821363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 37.069 ms 00:24:33.723 [2024-11-20 17:59:00.821379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.723 [2024-11-20 17:59:00.821456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:33.723 [2024-11-20 17:59:00.821469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:33.723 [2024-11-20 17:59:00.821480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:24:33.723 [2024-11-20 17:59:00.821490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:33.723 [2024-11-20 17:59:00.822749] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 380.005 ms, result 0 00:24:34.659  [2024-11-20T17:59:42.771Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-11-20 17:59:42.733655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.595 [2024-11-20 17:59:42.733718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:15.595 [2024-11-20
17:59:42.733734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:15.595 [2024-11-20 17:59:42.733752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.595 [2024-11-20 17:59:42.736661] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:15.595 [2024-11-20 17:59:42.741567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.595 [2024-11-20 17:59:42.741712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:15.595 [2024-11-20 17:59:42.741806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.742 ms 00:25:15.595 [2024-11-20 17:59:42.741843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.595 [2024-11-20 17:59:42.751100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.595 [2024-11-20 17:59:42.751243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:15.595 [2024-11-20 17:59:42.751322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.753 ms 00:25:15.595 [2024-11-20 17:59:42.751365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.855 [2024-11-20 17:59:42.775050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.855 [2024-11-20 17:59:42.775222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:15.855 [2024-11-20 17:59:42.775308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.680 ms 00:25:15.855 [2024-11-20 17:59:42.775347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.855 [2024-11-20 17:59:42.780400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.855 [2024-11-20 17:59:42.780533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:15.855 [2024-11-20 17:59:42.780553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.003 ms 00:25:15.855 [2024-11-20 17:59:42.780565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.855 [2024-11-20 17:59:42.817056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.855 [2024-11-20 17:59:42.817096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:15.855 [2024-11-20 17:59:42.817110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.497 ms 00:25:15.855 [2024-11-20 17:59:42.817121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.855 [2024-11-20 17:59:42.838432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.855 [2024-11-20 17:59:42.838478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:15.855 [2024-11-20 17:59:42.838493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.306 ms 00:25:15.855 [2024-11-20 17:59:42.838504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.855 [2024-11-20 17:59:42.959256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.855 [2024-11-20 17:59:42.959302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:15.855 [2024-11-20 17:59:42.959317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 120.900 ms 00:25:15.855 [2024-11-20 17:59:42.959328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:15.855 [2024-11-20 17:59:42.996527] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:15.855 [2024-11-20 17:59:42.996572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:15.855 [2024-11-20 17:59:42.996587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.239 ms 00:25:15.855 [2024-11-20 17:59:42.996598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.117 [2024-11-20 17:59:43.032742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.117 [2024-11-20 17:59:43.032797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:16.117 [2024-11-20 17:59:43.032811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.162 ms 00:25:16.117 [2024-11-20 17:59:43.032821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.117 [2024-11-20 17:59:43.068425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.117 [2024-11-20 17:59:43.068465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:16.117 [2024-11-20 17:59:43.068478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.623 ms 00:25:16.117 [2024-11-20 17:59:43.068488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.117 [2024-11-20 17:59:43.104177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.117 [2024-11-20 17:59:43.104219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:16.117 [2024-11-20 17:59:43.104233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.664 ms 00:25:16.117 [2024-11-20 17:59:43.104243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.117 [2024-11-20 17:59:43.104283] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:16.117 [2024-11-20 17:59:43.104302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 111104 / 261120 wr_cnt: 1 state: open 00:25:16.117 [2024-11-20 17:59:43.104315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104689] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 
17:59:43.104962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.104994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.105005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.105015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.105026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.105036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.105047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.105058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.105068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.105078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.105089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.105099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:16.117 [2024-11-20 17:59:43.105110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:25:16.118 [2024-11-20 17:59:43.105223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:16.118 [2024-11-20 17:59:43.105376] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:16.118 [2024-11-20 17:59:43.105386] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8837d17e-478f-4076-a8da-ed3abf6761e2 00:25:16.118 [2024-11-20 17:59:43.105397] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 111104 00:25:16.118 [2024-11-20 17:59:43.105407] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 112064 00:25:16.118 [2024-11-20 17:59:43.105417] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 111104 00:25:16.118 [2024-11-20 17:59:43.105427] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0086 00:25:16.118 [2024-11-20 17:59:43.105437] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:16.118 [2024-11-20 17:59:43.105452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:16.118 [2024-11-20 17:59:43.105472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:16.118 [2024-11-20 17:59:43.105481] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:16.118 [2024-11-20 17:59:43.105497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:16.118 [2024-11-20 17:59:43.105507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.118 [2024-11-20 17:59:43.105517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:16.118 [2024-11-20 17:59:43.105527] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.227 ms 00:25:16.118 [2024-11-20 17:59:43.105537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.118 [2024-11-20 17:59:43.125533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.118 [2024-11-20 17:59:43.125569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:16.118 [2024-11-20 17:59:43.125582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.990 ms 00:25:16.118 [2024-11-20 17:59:43.125598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.118 [2024-11-20 17:59:43.126357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.118 [2024-11-20 17:59:43.126373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:16.118 [2024-11-20 17:59:43.126383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.735 ms 00:25:16.118 [2024-11-20 17:59:43.126394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.118 [2024-11-20 17:59:43.179083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.118 [2024-11-20 17:59:43.179131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:16.118 [2024-11-20 17:59:43.179144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.118 [2024-11-20 17:59:43.179155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.118 [2024-11-20 17:59:43.179213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.118 [2024-11-20 17:59:43.179223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:16.118 [2024-11-20 17:59:43.179233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.118 [2024-11-20 17:59:43.179243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.118 [2024-11-20 17:59:43.179331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.118 [2024-11-20 17:59:43.179345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:16.118 [2024-11-20 17:59:43.179375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.118 [2024-11-20 17:59:43.179385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.118 [2024-11-20 17:59:43.179403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.118 [2024-11-20 17:59:43.179413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:16.118 [2024-11-20 17:59:43.179423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.118 [2024-11-20 17:59:43.179433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.378 [2024-11-20 17:59:43.300759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.378 [2024-11-20 17:59:43.300819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:16.378 [2024-11-20 17:59:43.300854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.378 [2024-11-20 17:59:43.300864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.378 [2024-11-20 17:59:43.398073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.378 [2024-11-20 17:59:43.398121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata 00:25:16.378 [2024-11-20 17:59:43.398136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.378 [2024-11-20 17:59:43.398146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.378 [2024-11-20 17:59:43.398230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.378 [2024-11-20 17:59:43.398242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:16.378 [2024-11-20 17:59:43.398253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.378 [2024-11-20 17:59:43.398268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.378 [2024-11-20 17:59:43.398304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.378 [2024-11-20 17:59:43.398315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:16.378 [2024-11-20 17:59:43.398325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.378 [2024-11-20 17:59:43.398335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.378 [2024-11-20 17:59:43.398426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.378 [2024-11-20 17:59:43.398439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:16.378 [2024-11-20 17:59:43.398450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.378 [2024-11-20 17:59:43.398459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.378 [2024-11-20 17:59:43.398497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.378 [2024-11-20 17:59:43.398510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:16.378 [2024-11-20 17:59:43.398520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.378 [2024-11-20 17:59:43.398529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.378 [2024-11-20 17:59:43.398564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.378 [2024-11-20 17:59:43.398575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:16.378 [2024-11-20 17:59:43.398585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.378 [2024-11-20 17:59:43.398594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.378 [2024-11-20 17:59:43.398639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:16.378 [2024-11-20 17:59:43.398650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:16.378 [2024-11-20 17:59:43.398661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:16.378 [2024-11-20 17:59:43.398670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.378 [2024-11-20 17:59:43.398806] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 668.184 ms, result 0 00:25:17.758 00:25:17.758 00:25:17.758 17:59:44 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:25:18.017 [2024-11-20 17:59:44.966137] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 
initialization... 00:25:18.017 [2024-11-20 17:59:44.966473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80657 ] 00:25:18.017 [2024-11-20 17:59:45.148479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:18.276 [2024-11-20 17:59:45.265387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:18.535 [2024-11-20 17:59:45.620468] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:18.535 [2024-11-20 17:59:45.620538] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:18.796 [2024-11-20 17:59:45.780713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.796 [2024-11-20 17:59:45.780763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:18.796 [2024-11-20 17:59:45.780795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:18.796 [2024-11-20 17:59:45.780805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.796 [2024-11-20 17:59:45.780850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.796 [2024-11-20 17:59:45.780862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:18.796 [2024-11-20 17:59:45.780875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:18.796 [2024-11-20 17:59:45.780885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.796 [2024-11-20 17:59:45.780906] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:18.796 [2024-11-20 17:59:45.781830] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:18.796 [2024-11-20 17:59:45.781860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.796 [2024-11-20 17:59:45.781871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:18.796 [2024-11-20 17:59:45.781882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:25:18.796 [2024-11-20 17:59:45.781892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.796 [2024-11-20 17:59:45.783316] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:18.796 [2024-11-20 17:59:45.802064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.796 [2024-11-20 17:59:45.802106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:18.796 [2024-11-20 17:59:45.802121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.779 ms 00:25:18.796 [2024-11-20 17:59:45.802131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.796 [2024-11-20 17:59:45.802193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.796 [2024-11-20 17:59:45.802206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:18.796 [2024-11-20 17:59:45.802216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:25:18.796 [2024-11-20 17:59:45.802226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.796 [2024-11-20 17:59:45.808913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:25:18.796 [2024-11-20 17:59:45.808941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:18.796 [2024-11-20 17:59:45.808954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.628 ms 00:25:18.796 [2024-11-20 17:59:45.808968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.796 [2024-11-20 17:59:45.809043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.796 [2024-11-20 17:59:45.809057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:18.796 [2024-11-20 17:59:45.809068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:18.796 [2024-11-20 17:59:45.809077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.796 [2024-11-20 17:59:45.809117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.796 [2024-11-20 17:59:45.809129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:18.796 [2024-11-20 17:59:45.809139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:18.796 [2024-11-20 17:59:45.809149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.796 [2024-11-20 17:59:45.809175] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:18.796 [2024-11-20 17:59:45.813881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.796 [2024-11-20 17:59:45.813914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:18.796 [2024-11-20 17:59:45.813926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.722 ms 00:25:18.796 [2024-11-20 17:59:45.813955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.796 [2024-11-20 17:59:45.813985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.796 [2024-11-20 17:59:45.813995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:18.797 [2024-11-20 17:59:45.814006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:18.797 [2024-11-20 17:59:45.814015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.797 [2024-11-20 17:59:45.814068] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:18.797 [2024-11-20 17:59:45.814091] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:18.797 [2024-11-20 17:59:45.814125] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:18.797 [2024-11-20 17:59:45.814146] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:18.797 [2024-11-20 17:59:45.814233] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:18.797 [2024-11-20 17:59:45.814246] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:18.797 [2024-11-20 17:59:45.814259] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:18.797 [2024-11-20 17:59:45.814272] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:18.797 [2024-11-20 17:59:45.814284] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:18.797 [2024-11-20 17:59:45.814295] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:18.797 [2024-11-20 17:59:45.814305] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:18.797 [2024-11-20 17:59:45.814315] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:18.797 [2024-11-20 17:59:45.814328] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:18.797 [2024-11-20 17:59:45.814339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.797 [2024-11-20 17:59:45.814349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:18.797 [2024-11-20 17:59:45.814360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:25:18.797 [2024-11-20 17:59:45.814369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.797 [2024-11-20 17:59:45.814440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.797 [2024-11-20 17:59:45.814451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:18.797 [2024-11-20 17:59:45.814460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:18.797 [2024-11-20 17:59:45.814470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.797 [2024-11-20 17:59:45.814565] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:18.797 [2024-11-20 17:59:45.814584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:18.797 [2024-11-20 17:59:45.814595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:18.797 [2024-11-20 17:59:45.814605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.797 [2024-11-20 17:59:45.814615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:18.797 [2024-11-20 17:59:45.814625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:18.797 [2024-11-20 17:59:45.814634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:18.797 [2024-11-20 17:59:45.814643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:18.797 [2024-11-20 17:59:45.814653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:18.797 [2024-11-20 17:59:45.814662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:18.797 [2024-11-20 17:59:45.814671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:18.797 [2024-11-20 17:59:45.814681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:18.797 [2024-11-20 17:59:45.814690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:18.797 [2024-11-20 17:59:45.814699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:18.797 [2024-11-20 17:59:45.814708] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:18.797 [2024-11-20 17:59:45.814727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.797 [2024-11-20 17:59:45.814736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:18.797 [2024-11-20 17:59:45.814745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:18.797 [2024-11-20 17:59:45.814754] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.797 [2024-11-20 17:59:45.814763] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:18.797 [2024-11-20 17:59:45.814787] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:18.797 [2024-11-20 17:59:45.814796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:18.797 [2024-11-20 17:59:45.814806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:18.797 [2024-11-20 17:59:45.814815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:18.797 [2024-11-20 17:59:45.814824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:18.797 [2024-11-20 17:59:45.814833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:18.797 [2024-11-20 17:59:45.814841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:18.797 [2024-11-20 17:59:45.814851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:18.797 [2024-11-20 17:59:45.814859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:18.797 [2024-11-20 17:59:45.814869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:18.797 [2024-11-20 17:59:45.814878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:18.797 [2024-11-20 17:59:45.814887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:18.797 [2024-11-20 17:59:45.814896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:18.797 [2024-11-20 17:59:45.814905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:18.797 [2024-11-20 17:59:45.814914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:18.797 [2024-11-20 17:59:45.814923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:18.797 [2024-11-20 17:59:45.814932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:18.797 [2024-11-20 17:59:45.814941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:18.797 [2024-11-20 17:59:45.814950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:18.797 [2024-11-20 17:59:45.814959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.797 [2024-11-20 17:59:45.814968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:18.797 [2024-11-20 17:59:45.814976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:18.797 [2024-11-20 17:59:45.814985] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.797 [2024-11-20 17:59:45.814996] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:18.797 [2024-11-20 17:59:45.815006] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:18.797 [2024-11-20 17:59:45.815016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:18.797 [2024-11-20 17:59:45.815025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:18.797 [2024-11-20 17:59:45.815036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:18.797 [2024-11-20 17:59:45.815045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:18.797 [2024-11-20 17:59:45.815054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:18.797 
[2024-11-20 17:59:45.815063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:18.797 [2024-11-20 17:59:45.815072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:18.797 [2024-11-20 17:59:45.815081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:18.797 [2024-11-20 17:59:45.815092] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:18.797 [2024-11-20 17:59:45.815104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:18.797 [2024-11-20 17:59:45.815116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:18.797 [2024-11-20 17:59:45.815126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:18.797 [2024-11-20 17:59:45.815136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:18.797 [2024-11-20 17:59:45.815146] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:18.797 [2024-11-20 17:59:45.815157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:18.797 [2024-11-20 17:59:45.815166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:18.797 [2024-11-20 17:59:45.815177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:18.797 [2024-11-20 17:59:45.815187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:18.797 [2024-11-20 17:59:45.815197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:18.797 [2024-11-20 17:59:45.815211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:18.797 [2024-11-20 17:59:45.815221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:18.797 [2024-11-20 17:59:45.815231] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:18.797 [2024-11-20 17:59:45.815240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:18.797 [2024-11-20 17:59:45.815251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:18.797 [2024-11-20 17:59:45.815261] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:18.797 [2024-11-20 17:59:45.815275] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:18.797 [2024-11-20 17:59:45.815287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:18.797 [2024-11-20 17:59:45.815297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:18.797 [2024-11-20 17:59:45.815307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:18.798 [2024-11-20 17:59:45.815317] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:18.798 [2024-11-20 17:59:45.815328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.798 [2024-11-20 17:59:45.815338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:18.798 [2024-11-20 17:59:45.815348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.817 ms 00:25:18.798 [2024-11-20 17:59:45.815358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.798 [2024-11-20 17:59:45.853431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.798 [2024-11-20 17:59:45.853469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:18.798 [2024-11-20 17:59:45.853483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.090 ms 00:25:18.798 [2024-11-20 17:59:45.853499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.798 [2024-11-20 17:59:45.853595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.798 [2024-11-20 17:59:45.853606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:18.798 [2024-11-20 17:59:45.853617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:25:18.798 [2024-11-20 17:59:45.853628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.798 [2024-11-20 17:59:45.906013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.798 [2024-11-20 17:59:45.906049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:18.798 [2024-11-20 17:59:45.906062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.416 ms 00:25:18.798 [2024-11-20 17:59:45.906088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.798 [2024-11-20 17:59:45.906123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.798 [2024-11-20 17:59:45.906135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:18.798 [2024-11-20 17:59:45.906150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:18.798 [2024-11-20 17:59:45.906160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.798 [2024-11-20 17:59:45.906631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.798 [2024-11-20 17:59:45.906653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:18.798 [2024-11-20 17:59:45.906664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:25:18.798 [2024-11-20 17:59:45.906673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.798 [2024-11-20 17:59:45.906797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.798 [2024-11-20 17:59:45.906811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:18.798 [2024-11-20 17:59:45.906821] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:25:18.798 [2024-11-20 17:59:45.906837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.798 [2024-11-20 17:59:45.925082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.798 [2024-11-20 17:59:45.925115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:18.798 [2024-11-20 17:59:45.925155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.254 ms 00:25:18.798 [2024-11-20 17:59:45.925165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.798 [2024-11-20 17:59:45.943182] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:18.798 [2024-11-20 17:59:45.943223] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:18.798 [2024-11-20 17:59:45.943238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.798 [2024-11-20 17:59:45.943265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:18.798 [2024-11-20 17:59:45.943278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.006 ms 00:25:18.798 [2024-11-20 17:59:45.943288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.058 [2024-11-20 17:59:45.972535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.058 [2024-11-20 17:59:45.972575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:19.058 [2024-11-20 17:59:45.972604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.253 ms 00:25:19.058 [2024-11-20 17:59:45.972614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.058 [2024-11-20 17:59:45.990332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.058 [2024-11-20 17:59:45.990381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:19.058 [2024-11-20 17:59:45.990394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.699 ms 00:25:19.058 [2024-11-20 17:59:45.990404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.058 [2024-11-20 17:59:46.008158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.058 [2024-11-20 17:59:46.008193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:19.058 [2024-11-20 17:59:46.008220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.747 ms 00:25:19.058 [2024-11-20 17:59:46.008230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.058 [2024-11-20 17:59:46.008993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.058 [2024-11-20 17:59:46.009024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:19.058 [2024-11-20 17:59:46.009036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:25:19.058 [2024-11-20 17:59:46.009049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.058 [2024-11-20 17:59:46.094711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.058 [2024-11-20 17:59:46.094765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:19.058 [2024-11-20 17:59:46.094794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.780 ms 00:25:19.058 [2024-11-20 17:59:46.094822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.058 [2024-11-20 17:59:46.105249] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:19.058 [2024-11-20 17:59:46.107724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.058 [2024-11-20 17:59:46.107752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:19.058 [2024-11-20 17:59:46.107790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.878 ms 00:25:19.058 [2024-11-20 17:59:46.107801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.058 [2024-11-20 17:59:46.107881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.058 [2024-11-20 17:59:46.107894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:19.058 [2024-11-20 17:59:46.107905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:19.058 [2024-11-20 17:59:46.107919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.058 [2024-11-20 17:59:46.109404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.058 [2024-11-20 17:59:46.109442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:19.058 [2024-11-20 17:59:46.109454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.446 ms 00:25:19.058 [2024-11-20 17:59:46.109464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.058 [2024-11-20 17:59:46.109499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.058 [2024-11-20 17:59:46.109510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:19.058 [2024-11-20 17:59:46.109521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:19.058 [2024-11-20 17:59:46.109531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.058 [2024-11-20 17:59:46.109594] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:19.058 [2024-11-20 17:59:46.109606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.058 [2024-11-20 17:59:46.109616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:19.058 [2024-11-20 17:59:46.109626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:19.058 [2024-11-20 17:59:46.109636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.058 [2024-11-20 17:59:46.146768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.058 [2024-11-20 17:59:46.146813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:19.058 [2024-11-20 17:59:46.146827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.169 ms 00:25:19.058 [2024-11-20 17:59:46.146844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:19.058 [2024-11-20 17:59:46.146932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:19.058 [2024-11-20 17:59:46.146944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:19.058 [2024-11-20 17:59:46.146955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:19.058 [2024-11-20 17:59:46.146965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
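Every management step in the startup sequence above is reported by trace_step() as the same four-entry group: an Action (or Rollback) marker, a name, a duration in milliseconds, and a status. Pairing each name entry with the duration entry that follows it makes it easy to rank the slowest steps. A minimal awk sketch, assuming the capture has first been restored to one log entry per line (build.log is a placeholder filename):

  awk '/trace_step/ && /name:/     { sub(/.*name: /, ""); name = $0 }
       /trace_step/ && /duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, "")
                                     printf "%10.3f ms  %s\n", $0, name }' build.log |
    sort -rn | head

On this startup that puts Restore P2L checkpoints (85.780 ms) and Initialize NV cache (52.416 ms) at the top, consistent with the 367.477 ms total reported for 'FTL startup' immediately below.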
00:25:19.058 [2024-11-20 17:59:46.148035] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 367.477 ms, result 0 00:25:20.461  [2024-11-20T17:59:48.576Z] Copying: 23/1024 [MB] (23 MBps) [2024-11-20T17:59:49.512Z] Copying: 50/1024 [MB] (26 MBps) [2024-11-20T17:59:50.450Z] Copying: 77/1024 [MB] (27 MBps) [2024-11-20T17:59:51.388Z] Copying: 104/1024 [MB] (26 MBps) [2024-11-20T17:59:52.767Z] Copying: 130/1024 [MB] (26 MBps) [2024-11-20T17:59:53.705Z] Copying: 157/1024 [MB] (26 MBps) [2024-11-20T17:59:54.643Z] Copying: 184/1024 [MB] (26 MBps) [2024-11-20T17:59:55.660Z] Copying: 211/1024 [MB] (26 MBps) [2024-11-20T17:59:56.598Z] Copying: 237/1024 [MB] (26 MBps) [2024-11-20T17:59:57.535Z] Copying: 263/1024 [MB] (25 MBps) [2024-11-20T17:59:58.474Z] Copying: 289/1024 [MB] (26 MBps) [2024-11-20T17:59:59.410Z] Copying: 314/1024 [MB] (24 MBps) [2024-11-20T18:00:00.790Z] Copying: 340/1024 [MB] (26 MBps) [2024-11-20T18:00:01.356Z] Copying: 365/1024 [MB] (24 MBps) [2024-11-20T18:00:02.735Z] Copying: 391/1024 [MB] (26 MBps) [2024-11-20T18:00:03.672Z] Copying: 418/1024 [MB] (27 MBps) [2024-11-20T18:00:04.610Z] Copying: 446/1024 [MB] (27 MBps) [2024-11-20T18:00:05.548Z] Copying: 473/1024 [MB] (27 MBps) [2024-11-20T18:00:06.485Z] Copying: 500/1024 [MB] (27 MBps) [2024-11-20T18:00:07.422Z] Copying: 527/1024 [MB] (26 MBps) [2024-11-20T18:00:08.360Z] Copying: 553/1024 [MB] (26 MBps) [2024-11-20T18:00:09.795Z] Copying: 580/1024 [MB] (26 MBps) [2024-11-20T18:00:10.363Z] Copying: 606/1024 [MB] (26 MBps) [2024-11-20T18:00:11.740Z] Copying: 633/1024 [MB] (26 MBps) [2024-11-20T18:00:12.678Z] Copying: 659/1024 [MB] (26 MBps) [2024-11-20T18:00:13.615Z] Copying: 684/1024 [MB] (25 MBps) [2024-11-20T18:00:14.552Z] Copying: 710/1024 [MB] (25 MBps) [2024-11-20T18:00:15.525Z] Copying: 736/1024 [MB] (25 MBps) [2024-11-20T18:00:16.462Z] Copying: 761/1024 [MB] (25 MBps) [2024-11-20T18:00:17.398Z] Copying: 787/1024 [MB] (25 MBps) [2024-11-20T18:00:18.334Z] Copying: 813/1024 [MB] (25 MBps) [2024-11-20T18:00:19.711Z] Copying: 841/1024 [MB] (27 MBps) [2024-11-20T18:00:20.647Z] Copying: 868/1024 [MB] (27 MBps) [2024-11-20T18:00:21.581Z] Copying: 894/1024 [MB] (25 MBps) [2024-11-20T18:00:22.515Z] Copying: 920/1024 [MB] (25 MBps) [2024-11-20T18:00:23.450Z] Copying: 945/1024 [MB] (25 MBps) [2024-11-20T18:00:24.386Z] Copying: 972/1024 [MB] (27 MBps) [2024-11-20T18:00:25.321Z] Copying: 998/1024 [MB] (25 MBps) [2024-11-20T18:00:26.256Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-20 18:00:26.042077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.080 [2024-11-20 18:00:26.042162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:59.080 [2024-11-20 18:00:26.042182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:59.080 [2024-11-20 18:00:26.042208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.080 [2024-11-20 18:00:26.042237] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:59.080 [2024-11-20 18:00:26.047264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.080 [2024-11-20 18:00:26.047308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:59.080 [2024-11-20 18:00:26.047321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.014 ms 00:25:59.080 [2024-11-20 18:00:26.047333] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:25:59.080 [2024-11-20 18:00:26.047564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.080 [2024-11-20 18:00:26.047578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:59.080 [2024-11-20 18:00:26.047589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:25:59.080 [2024-11-20 18:00:26.047600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.080 [2024-11-20 18:00:26.051944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.080 [2024-11-20 18:00:26.051989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:59.080 [2024-11-20 18:00:26.052003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.326 ms 00:25:59.080 [2024-11-20 18:00:26.052015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.080 [2024-11-20 18:00:26.057502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.080 [2024-11-20 18:00:26.057541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:59.080 [2024-11-20 18:00:26.057555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.456 ms 00:25:59.080 [2024-11-20 18:00:26.057566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.080 [2024-11-20 18:00:26.095566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.080 [2024-11-20 18:00:26.095617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:59.080 [2024-11-20 18:00:26.095632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.976 ms 00:25:59.080 [2024-11-20 18:00:26.095643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.080 [2024-11-20 18:00:26.118612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.080 [2024-11-20 18:00:26.118655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:59.080 [2024-11-20 18:00:26.118669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.964 ms 00:25:59.080 [2024-11-20 18:00:26.118680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.340 [2024-11-20 18:00:26.279511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.340 [2024-11-20 18:00:26.279552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:59.340 [2024-11-20 18:00:26.279567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 161.047 ms 00:25:59.340 [2024-11-20 18:00:26.279579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.340 [2024-11-20 18:00:26.315964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.340 [2024-11-20 18:00:26.316001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:59.340 [2024-11-20 18:00:26.316015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.427 ms 00:25:59.340 [2024-11-20 18:00:26.316025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.340 [2024-11-20 18:00:26.352124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.340 [2024-11-20 18:00:26.352161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:59.340 [2024-11-20 18:00:26.352188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.120 ms 00:25:59.340 
[2024-11-20 18:00:26.352199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.340 [2024-11-20 18:00:26.387261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.340 [2024-11-20 18:00:26.387295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:59.340 [2024-11-20 18:00:26.387308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.033 ms 00:25:59.340 [2024-11-20 18:00:26.387334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.340 [2024-11-20 18:00:26.422276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.340 [2024-11-20 18:00:26.422312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:59.340 [2024-11-20 18:00:26.422325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.922 ms 00:25:59.340 [2024-11-20 18:00:26.422335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.340 [2024-11-20 18:00:26.422372] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:59.340 [2024-11-20 18:00:26.422388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:25:59.340 [2024-11-20 18:00:26.422401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 
[2024-11-20 18:00:26.422584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:59.340 [2024-11-20 18:00:26.422614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 
state: free 00:25:59.341 [2024-11-20 18:00:26.422861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.422990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 
0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:59.341 [2024-11-20 18:00:26.423486] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:59.341 [2024-11-20 18:00:26.423496] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8837d17e-478f-4076-a8da-ed3abf6761e2 00:25:59.341 [2024-11-20 18:00:26.423507] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:25:59.341 [2024-11-20 18:00:26.423517] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 20928 00:25:59.341 [2024-11-20 18:00:26.423527] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 19968 00:25:59.341 [2024-11-20 18:00:26.423538] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0481 00:25:59.341 [2024-11-20 18:00:26.423548] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:59.341 [2024-11-20 18:00:26.423565] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:59.341 [2024-11-20 18:00:26.423575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:59.341 [2024-11-20 18:00:26.423595] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:59.341 [2024-11-20 18:00:26.423604] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:59.341 [2024-11-20 18:00:26.423613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.341 [2024-11-20 18:00:26.423624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:59.341 [2024-11-20 18:00:26.423635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:25:59.341 [2024-11-20 18:00:26.423645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.341 [2024-11-20 18:00:26.444177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.342 [2024-11-20 18:00:26.444211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:59.342 [2024-11-20 18:00:26.444225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.531 ms 00:25:59.342 [2024-11-20 18:00:26.444241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.342 [2024-11-20 18:00:26.444848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:59.342 [2024-11-20 18:00:26.444866] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:59.342 [2024-11-20 18:00:26.444879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.587 ms 00:25:59.342 [2024-11-20 18:00:26.444890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.342 [2024-11-20 18:00:26.500016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.342 [2024-11-20 18:00:26.500063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:59.342 [2024-11-20 18:00:26.500079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.342 [2024-11-20 18:00:26.500090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.342 [2024-11-20 18:00:26.500162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.342 [2024-11-20 18:00:26.500174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:59.342 [2024-11-20 18:00:26.500184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.342 [2024-11-20 18:00:26.500196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.342 [2024-11-20 18:00:26.500313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.342 [2024-11-20 18:00:26.500329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:59.342 [2024-11-20 18:00:26.500345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.342 [2024-11-20 18:00:26.500355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.342 [2024-11-20 18:00:26.500374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.342 [2024-11-20 18:00:26.500385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:59.342 [2024-11-20 18:00:26.500395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.342 [2024-11-20 18:00:26.500406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.601 [2024-11-20 18:00:26.633797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.601 [2024-11-20 18:00:26.633868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:59.601 [2024-11-20 18:00:26.633893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.601 [2024-11-20 18:00:26.633904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.601 [2024-11-20 18:00:26.740131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.601 [2024-11-20 18:00:26.740200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:59.601 [2024-11-20 18:00:26.740218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.601 [2024-11-20 18:00:26.740229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.601 [2024-11-20 18:00:26.740352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.601 [2024-11-20 18:00:26.740366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:59.601 [2024-11-20 18:00:26.740377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.601 [2024-11-20 18:00:26.740392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.601 [2024-11-20 18:00:26.740442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:25:59.601 [2024-11-20 18:00:26.740454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:59.601 [2024-11-20 18:00:26.740464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.601 [2024-11-20 18:00:26.740475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.601 [2024-11-20 18:00:26.740604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.601 [2024-11-20 18:00:26.740617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:59.601 [2024-11-20 18:00:26.740629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.601 [2024-11-20 18:00:26.740639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.601 [2024-11-20 18:00:26.740681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.601 [2024-11-20 18:00:26.740694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:59.601 [2024-11-20 18:00:26.740705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.601 [2024-11-20 18:00:26.740716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.601 [2024-11-20 18:00:26.740763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.601 [2024-11-20 18:00:26.740796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:59.601 [2024-11-20 18:00:26.740807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.601 [2024-11-20 18:00:26.740817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.601 [2024-11-20 18:00:26.740871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:59.601 [2024-11-20 18:00:26.740884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:59.601 [2024-11-20 18:00:26.740895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:59.601 [2024-11-20 18:00:26.740906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:59.601 [2024-11-20 18:00:26.741051] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 700.063 ms, result 0 00:26:00.978 00:26:00.978 00:26:00.978 18:00:27 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:02.882 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:02.882 18:00:29 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:26:02.882 18:00:29 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:26:02.882 18:00:29 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:02.882 18:00:29 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:02.882 18:00:29 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:02.882 18:00:29 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79038 00:26:02.882 18:00:29 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79038 ']' 00:26:02.882 18:00:29 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79038 00:26:02.882 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79038) - No such process 00:26:02.882 Process with pid 79038 is not found 00:26:02.882 Remove shared memory files 
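The statistics dumped a few entries above also make the write-amplification figure easy to verify by hand: WAF is total device writes divided by user writes, and 20928 / 19968 rounds to the reported 1.0481. As a quick check (awk used purely as a calculator here):

  awk 'BEGIN { printf "WAF = %.4f\n", 20928 / 19968 }'
  # prints: WAF = 1.0481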
00:26:02.882 18:00:29 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79038 is not found' 00:26:02.882 18:00:29 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:26:02.882 18:00:29 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:26:02.882 18:00:29 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:26:02.882 18:00:29 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:26:02.882 18:00:29 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:26:02.882 18:00:29 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:26:02.882 18:00:29 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:26:02.882 00:26:02.882 real 3m20.066s 00:26:02.882 user 3m6.691s 00:26:02.882 sys 0m13.970s 00:26:02.882 18:00:29 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:02.882 ************************************ 00:26:02.882 END TEST ftl_restore 00:26:02.882 ************************************ 00:26:02.882 18:00:29 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:26:02.882 18:00:29 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:02.882 18:00:29 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:26:02.882 18:00:29 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:02.882 18:00:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:26:02.882 ************************************ 00:26:02.882 START TEST ftl_dirty_shutdown 00:26:02.882 ************************************ 00:26:02.882 18:00:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:26:02.882 * Looking for test storage... 
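The teardown traced just above — trap reset, scratch-file removal, killprocess, then remove_shm — is the usual shape of an SPDK shell-test cleanup. A minimal sketch of that pattern, not SPDK's exact helpers; in particular the kill-and-wait branch is an assumption, since this run only exercises the process-already-gone path:

  restore_kill() {
    rm -f "$testdir/testfile" "$testdir/testfile.md5"   # scratch files, as in the trace
    killprocess "$svcpid"                               # $svcpid held 79038 in this run
    remove_shm                                          # rm -f the /dev/shm leftovers
  }

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                  # mirrors the '[' -z 79038 ']' guard
    if kill -0 "$pid" 2>/dev/null; then        # is the process still alive?
      kill "$pid" && wait "$pid"               # assumption: graceful stop when it is
    else
      echo "Process with pid $pid is not found"
    fi
  }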
00:26:02.882 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:26:02.882 18:00:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:02.883 18:00:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:26:02.883 18:00:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:03.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.143 --rc genhtml_branch_coverage=1 00:26:03.143 --rc genhtml_function_coverage=1 00:26:03.143 --rc genhtml_legend=1 00:26:03.143 --rc geninfo_all_blocks=1 00:26:03.143 --rc geninfo_unexecuted_blocks=1 00:26:03.143 00:26:03.143 ' 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:03.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.143 --rc genhtml_branch_coverage=1 00:26:03.143 --rc genhtml_function_coverage=1 00:26:03.143 --rc genhtml_legend=1 00:26:03.143 --rc geninfo_all_blocks=1 00:26:03.143 --rc geninfo_unexecuted_blocks=1 00:26:03.143 00:26:03.143 ' 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:03.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.143 --rc genhtml_branch_coverage=1 00:26:03.143 --rc genhtml_function_coverage=1 00:26:03.143 --rc genhtml_legend=1 00:26:03.143 --rc geninfo_all_blocks=1 00:26:03.143 --rc geninfo_unexecuted_blocks=1 00:26:03.143 00:26:03.143 ' 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:03.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:03.143 --rc genhtml_branch_coverage=1 00:26:03.143 --rc genhtml_function_coverage=1 00:26:03.143 --rc genhtml_legend=1 00:26:03.143 --rc geninfo_all_blocks=1 00:26:03.143 --rc geninfo_unexecuted_blocks=1 00:26:03.143 00:26:03.143 ' 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:26:03.143 18:00:30 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:26:03.143 18:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81180 00:26:03.144 18:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81180 00:26:03.144 18:00:30 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:26:03.144 18:00:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81180 ']' 00:26:03.144 18:00:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:03.144 18:00:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:03.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:03.144 18:00:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:03.144 18:00:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:03.144 18:00:30 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:03.144 [2024-11-20 18:00:30.238088] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
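Everything up to this point is harness prologue: scripts/common.sh's cmp_versions does a pure-bash dotted-version compare (here lcov 1.15 < 2, which enables the --rc branch/function-coverage flags in LCOV_OPTS), ftl/common.sh resolves testdir and rootdir with dirname/readlink and exports the target binary, core masks and RPC socket paths, and dirty_shutdown.sh parses -u/-c via getopts (the -c argument becomes nv_cache=0000:00:10.0) before launching spdk_tgt -m 0x1 and waiting for its RPC socket with waitforlisten. Below is a minimal sketch of that last launch-and-wait step; the binary and socket paths are the ones visible in the trace, while the retry budget and the rpc_get_methods probe are illustrative assumptions, not the harness's actual waitforlisten() implementation.

    #!/usr/bin/env bash
    # Sketch: start spdk_tgt and poll its UNIX-domain RPC socket until it
    # answers. Hugepages are assumed to be configured already (the harness
    # does that earlier in the run).
    set -euo pipefail

    spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    rpc_addr=/var/tmp/spdk.sock

    "$spdk_tgt_bin" -m 0x1 &          # pin the target to core 0, as in the trace
    svcpid=$!
    trap 'kill "$svcpid"' EXIT        # rough stand-in for the restore_kill trap

    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 1; i <= 100; i++)); do
        # Any cheap RPC works as a liveness probe; rpc_get_methods is built in.
        if "$rpc_py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
            echo "spdk_tgt (pid $svcpid) is ready"
            break
        fi
        sleep 0.5
    done

In the log the same handshake is visible as the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message followed by the DPDK EAL startup banner of pid 81180.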
00:26:03.144 [2024-11-20 18:00:30.238215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81180 ] 00:26:03.403 [2024-11-20 18:00:30.424586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:03.403 [2024-11-20 18:00:30.562038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:04.790 18:00:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:04.790 18:00:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:26:04.790 18:00:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:26:04.790 18:00:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:26:04.790 18:00:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:04.790 18:00:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:26:04.790 18:00:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:04.790 18:00:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:26:04.790 18:00:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:26:04.790 18:00:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:04.790 18:00:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:26:04.790 18:00:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:26:04.790 18:00:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:04.790 18:00:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:04.790 18:00:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:04.790 18:00:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:26:05.062 18:00:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:05.062 { 00:26:05.062 "name": "nvme0n1", 00:26:05.062 "aliases": [ 00:26:05.062 "0f29c876-02c9-4725-b7a3-74b09f5f4616" 00:26:05.062 ], 00:26:05.062 "product_name": "NVMe disk", 00:26:05.062 "block_size": 4096, 00:26:05.062 "num_blocks": 1310720, 00:26:05.062 "uuid": "0f29c876-02c9-4725-b7a3-74b09f5f4616", 00:26:05.062 "numa_id": -1, 00:26:05.062 "assigned_rate_limits": { 00:26:05.062 "rw_ios_per_sec": 0, 00:26:05.063 "rw_mbytes_per_sec": 0, 00:26:05.063 "r_mbytes_per_sec": 0, 00:26:05.063 "w_mbytes_per_sec": 0 00:26:05.063 }, 00:26:05.063 "claimed": true, 00:26:05.063 "claim_type": "read_many_write_one", 00:26:05.063 "zoned": false, 00:26:05.063 "supported_io_types": { 00:26:05.063 "read": true, 00:26:05.063 "write": true, 00:26:05.063 "unmap": true, 00:26:05.063 "flush": true, 00:26:05.063 "reset": true, 00:26:05.063 "nvme_admin": true, 00:26:05.063 "nvme_io": true, 00:26:05.063 "nvme_io_md": false, 00:26:05.063 "write_zeroes": true, 00:26:05.063 "zcopy": false, 00:26:05.063 "get_zone_info": false, 00:26:05.063 "zone_management": false, 00:26:05.063 "zone_append": false, 00:26:05.063 "compare": true, 00:26:05.063 "compare_and_write": false, 00:26:05.063 "abort": true, 00:26:05.063 "seek_hole": false, 00:26:05.063 "seek_data": false, 00:26:05.063 
"copy": true, 00:26:05.063 "nvme_iov_md": false 00:26:05.063 }, 00:26:05.063 "driver_specific": { 00:26:05.063 "nvme": [ 00:26:05.063 { 00:26:05.063 "pci_address": "0000:00:11.0", 00:26:05.063 "trid": { 00:26:05.063 "trtype": "PCIe", 00:26:05.063 "traddr": "0000:00:11.0" 00:26:05.063 }, 00:26:05.063 "ctrlr_data": { 00:26:05.063 "cntlid": 0, 00:26:05.063 "vendor_id": "0x1b36", 00:26:05.063 "model_number": "QEMU NVMe Ctrl", 00:26:05.063 "serial_number": "12341", 00:26:05.063 "firmware_revision": "8.0.0", 00:26:05.063 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:05.063 "oacs": { 00:26:05.063 "security": 0, 00:26:05.063 "format": 1, 00:26:05.063 "firmware": 0, 00:26:05.063 "ns_manage": 1 00:26:05.063 }, 00:26:05.063 "multi_ctrlr": false, 00:26:05.063 "ana_reporting": false 00:26:05.063 }, 00:26:05.063 "vs": { 00:26:05.063 "nvme_version": "1.4" 00:26:05.063 }, 00:26:05.063 "ns_data": { 00:26:05.063 "id": 1, 00:26:05.063 "can_share": false 00:26:05.063 } 00:26:05.063 } 00:26:05.063 ], 00:26:05.063 "mp_policy": "active_passive" 00:26:05.063 } 00:26:05.063 } 00:26:05.063 ]' 00:26:05.063 18:00:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:05.063 18:00:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:05.063 18:00:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:05.063 18:00:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:26:05.063 18:00:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:26:05.063 18:00:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:26:05.063 18:00:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:05.063 18:00:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:26:05.063 18:00:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:05.063 18:00:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:05.063 18:00:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:05.322 18:00:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=053f174a-0dd1-4af1-b80f-a4131c49976c 00:26:05.322 18:00:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:05.322 18:00:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 053f174a-0dd1-4af1-b80f-a4131c49976c 00:26:05.581 18:00:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:26:05.842 18:00:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=f9547cfc-9a37-410f-9e03-0dbf21a9dc77 00:26:05.842 18:00:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f9547cfc-9a37-410f-9e03-0dbf21a9dc77 00:26:06.101 18:00:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=b5312554-0c5a-4906-a8c5-f53a5986d433 00:26:06.101 18:00:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:26:06.101 18:00:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b5312554-0c5a-4906-a8c5-f53a5986d433 00:26:06.101 18:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:26:06.101 18:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:26:06.101 18:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=b5312554-0c5a-4906-a8c5-f53a5986d433 00:26:06.101 18:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:26:06.101 18:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size b5312554-0c5a-4906-a8c5-f53a5986d433 00:26:06.101 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b5312554-0c5a-4906-a8c5-f53a5986d433 00:26:06.101 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:06.101 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:06.102 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:06.102 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b5312554-0c5a-4906-a8c5-f53a5986d433 00:26:06.102 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:06.102 { 00:26:06.102 "name": "b5312554-0c5a-4906-a8c5-f53a5986d433", 00:26:06.102 "aliases": [ 00:26:06.102 "lvs/nvme0n1p0" 00:26:06.102 ], 00:26:06.102 "product_name": "Logical Volume", 00:26:06.102 "block_size": 4096, 00:26:06.102 "num_blocks": 26476544, 00:26:06.102 "uuid": "b5312554-0c5a-4906-a8c5-f53a5986d433", 00:26:06.102 "assigned_rate_limits": { 00:26:06.102 "rw_ios_per_sec": 0, 00:26:06.102 "rw_mbytes_per_sec": 0, 00:26:06.102 "r_mbytes_per_sec": 0, 00:26:06.102 "w_mbytes_per_sec": 0 00:26:06.102 }, 00:26:06.102 "claimed": false, 00:26:06.102 "zoned": false, 00:26:06.102 "supported_io_types": { 00:26:06.102 "read": true, 00:26:06.102 "write": true, 00:26:06.102 "unmap": true, 00:26:06.102 "flush": false, 00:26:06.102 "reset": true, 00:26:06.102 "nvme_admin": false, 00:26:06.102 "nvme_io": false, 00:26:06.102 "nvme_io_md": false, 00:26:06.102 "write_zeroes": true, 00:26:06.102 "zcopy": false, 00:26:06.102 "get_zone_info": false, 00:26:06.102 "zone_management": false, 00:26:06.102 "zone_append": false, 00:26:06.102 "compare": false, 00:26:06.102 "compare_and_write": false, 00:26:06.102 "abort": false, 00:26:06.102 "seek_hole": true, 00:26:06.102 "seek_data": true, 00:26:06.102 "copy": false, 00:26:06.102 "nvme_iov_md": false 00:26:06.102 }, 00:26:06.102 "driver_specific": { 00:26:06.102 "lvol": { 00:26:06.102 "lvol_store_uuid": "f9547cfc-9a37-410f-9e03-0dbf21a9dc77", 00:26:06.102 "base_bdev": "nvme0n1", 00:26:06.102 "thin_provision": true, 00:26:06.102 "num_allocated_clusters": 0, 00:26:06.102 "snapshot": false, 00:26:06.102 "clone": false, 00:26:06.102 "esnap_clone": false 00:26:06.102 } 00:26:06.102 } 00:26:06.102 } 00:26:06.102 ]' 00:26:06.102 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:06.102 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:06.102 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:06.361 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:06.362 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:06.362 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:06.362 18:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:26:06.362 18:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:06.362 18:00:33 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:26:06.621 18:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:26:06.621 18:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:26:06.621 18:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size b5312554-0c5a-4906-a8c5-f53a5986d433 00:26:06.621 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b5312554-0c5a-4906-a8c5-f53a5986d433 00:26:06.621 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:06.621 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:06.621 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:06.621 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b5312554-0c5a-4906-a8c5-f53a5986d433 00:26:06.621 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:06.621 { 00:26:06.621 "name": "b5312554-0c5a-4906-a8c5-f53a5986d433", 00:26:06.621 "aliases": [ 00:26:06.621 "lvs/nvme0n1p0" 00:26:06.621 ], 00:26:06.621 "product_name": "Logical Volume", 00:26:06.621 "block_size": 4096, 00:26:06.621 "num_blocks": 26476544, 00:26:06.621 "uuid": "b5312554-0c5a-4906-a8c5-f53a5986d433", 00:26:06.621 "assigned_rate_limits": { 00:26:06.621 "rw_ios_per_sec": 0, 00:26:06.621 "rw_mbytes_per_sec": 0, 00:26:06.621 "r_mbytes_per_sec": 0, 00:26:06.621 "w_mbytes_per_sec": 0 00:26:06.621 }, 00:26:06.621 "claimed": false, 00:26:06.621 "zoned": false, 00:26:06.621 "supported_io_types": { 00:26:06.621 "read": true, 00:26:06.621 "write": true, 00:26:06.621 "unmap": true, 00:26:06.621 "flush": false, 00:26:06.621 "reset": true, 00:26:06.621 "nvme_admin": false, 00:26:06.621 "nvme_io": false, 00:26:06.621 "nvme_io_md": false, 00:26:06.621 "write_zeroes": true, 00:26:06.621 "zcopy": false, 00:26:06.621 "get_zone_info": false, 00:26:06.621 "zone_management": false, 00:26:06.621 "zone_append": false, 00:26:06.621 "compare": false, 00:26:06.621 "compare_and_write": false, 00:26:06.621 "abort": false, 00:26:06.621 "seek_hole": true, 00:26:06.621 "seek_data": true, 00:26:06.621 "copy": false, 00:26:06.621 "nvme_iov_md": false 00:26:06.621 }, 00:26:06.621 "driver_specific": { 00:26:06.622 "lvol": { 00:26:06.622 "lvol_store_uuid": "f9547cfc-9a37-410f-9e03-0dbf21a9dc77", 00:26:06.622 "base_bdev": "nvme0n1", 00:26:06.622 "thin_provision": true, 00:26:06.622 "num_allocated_clusters": 0, 00:26:06.622 "snapshot": false, 00:26:06.622 "clone": false, 00:26:06.622 "esnap_clone": false 00:26:06.622 } 00:26:06.622 } 00:26:06.622 } 00:26:06.622 ]' 00:26:06.622 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:06.880 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:06.880 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:06.880 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:06.880 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:06.881 18:00:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:06.881 18:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:26:06.881 18:00:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:26:07.140 18:00:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:26:07.140 18:00:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size b5312554-0c5a-4906-a8c5-f53a5986d433 00:26:07.140 18:00:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=b5312554-0c5a-4906-a8c5-f53a5986d433 00:26:07.140 18:00:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:26:07.140 18:00:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:26:07.140 18:00:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:26:07.140 18:00:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b5312554-0c5a-4906-a8c5-f53a5986d433 00:26:07.140 18:00:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:26:07.140 { 00:26:07.140 "name": "b5312554-0c5a-4906-a8c5-f53a5986d433", 00:26:07.140 "aliases": [ 00:26:07.140 "lvs/nvme0n1p0" 00:26:07.140 ], 00:26:07.140 "product_name": "Logical Volume", 00:26:07.140 "block_size": 4096, 00:26:07.140 "num_blocks": 26476544, 00:26:07.140 "uuid": "b5312554-0c5a-4906-a8c5-f53a5986d433", 00:26:07.140 "assigned_rate_limits": { 00:26:07.140 "rw_ios_per_sec": 0, 00:26:07.140 "rw_mbytes_per_sec": 0, 00:26:07.140 "r_mbytes_per_sec": 0, 00:26:07.140 "w_mbytes_per_sec": 0 00:26:07.140 }, 00:26:07.140 "claimed": false, 00:26:07.140 "zoned": false, 00:26:07.140 "supported_io_types": { 00:26:07.140 "read": true, 00:26:07.140 "write": true, 00:26:07.140 "unmap": true, 00:26:07.140 "flush": false, 00:26:07.140 "reset": true, 00:26:07.140 "nvme_admin": false, 00:26:07.140 "nvme_io": false, 00:26:07.140 "nvme_io_md": false, 00:26:07.140 "write_zeroes": true, 00:26:07.140 "zcopy": false, 00:26:07.140 "get_zone_info": false, 00:26:07.140 "zone_management": false, 00:26:07.140 "zone_append": false, 00:26:07.140 "compare": false, 00:26:07.140 "compare_and_write": false, 00:26:07.140 "abort": false, 00:26:07.140 "seek_hole": true, 00:26:07.140 "seek_data": true, 00:26:07.140 "copy": false, 00:26:07.140 "nvme_iov_md": false 00:26:07.140 }, 00:26:07.140 "driver_specific": { 00:26:07.140 "lvol": { 00:26:07.140 "lvol_store_uuid": "f9547cfc-9a37-410f-9e03-0dbf21a9dc77", 00:26:07.140 "base_bdev": "nvme0n1", 00:26:07.140 "thin_provision": true, 00:26:07.140 "num_allocated_clusters": 0, 00:26:07.140 "snapshot": false, 00:26:07.140 "clone": false, 00:26:07.140 "esnap_clone": false 00:26:07.140 } 00:26:07.140 } 00:26:07.140 } 00:26:07.140 ]' 00:26:07.140 18:00:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:26:07.400 18:00:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:26:07.400 18:00:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:26:07.400 18:00:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:26:07.400 18:00:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:26:07.400 18:00:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:26:07.400 18:00:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:26:07.400 18:00:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d b5312554-0c5a-4906-a8c5-f53a5986d433 
--l2p_dram_limit 10' 00:26:07.400 18:00:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:26:07.400 18:00:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:26:07.400 18:00:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:26:07.400 18:00:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b5312554-0c5a-4906-a8c5-f53a5986d433 --l2p_dram_limit 10 -c nvc0n1p0 00:26:07.400 [2024-11-20 18:00:34.549793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.400 [2024-11-20 18:00:34.549857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:07.400 [2024-11-20 18:00:34.549879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:07.400 [2024-11-20 18:00:34.549891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.400 [2024-11-20 18:00:34.549977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.400 [2024-11-20 18:00:34.549992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:07.400 [2024-11-20 18:00:34.550006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:26:07.400 [2024-11-20 18:00:34.550017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.400 [2024-11-20 18:00:34.550043] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:07.400 [2024-11-20 18:00:34.551220] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:07.400 [2024-11-20 18:00:34.551260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.400 [2024-11-20 18:00:34.551272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:07.400 [2024-11-20 18:00:34.551287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.220 ms 00:26:07.400 [2024-11-20 18:00:34.551298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.400 [2024-11-20 18:00:34.551387] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a338d17b-0145-4b0b-9a6a-09252efa416c 00:26:07.400 [2024-11-20 18:00:34.553762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.400 [2024-11-20 18:00:34.553807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:26:07.400 [2024-11-20 18:00:34.553820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:26:07.400 [2024-11-20 18:00:34.553836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.400 [2024-11-20 18:00:34.567707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.400 [2024-11-20 18:00:34.567742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:07.400 [2024-11-20 18:00:34.567755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.826 ms 00:26:07.400 [2024-11-20 18:00:34.567779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.400 [2024-11-20 18:00:34.567889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.400 [2024-11-20 18:00:34.567907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:07.400 [2024-11-20 18:00:34.567919] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:26:07.400 [2024-11-20 18:00:34.567940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.400 [2024-11-20 18:00:34.568007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.400 [2024-11-20 18:00:34.568024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:07.400 [2024-11-20 18:00:34.568035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:26:07.400 [2024-11-20 18:00:34.568053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.400 [2024-11-20 18:00:34.568081] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:07.660 [2024-11-20 18:00:34.574462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.660 [2024-11-20 18:00:34.574493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:07.660 [2024-11-20 18:00:34.574511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.396 ms 00:26:07.660 [2024-11-20 18:00:34.574523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.660 [2024-11-20 18:00:34.574562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.660 [2024-11-20 18:00:34.574574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:07.660 [2024-11-20 18:00:34.574589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:07.660 [2024-11-20 18:00:34.574600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.660 [2024-11-20 18:00:34.574638] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:26:07.660 [2024-11-20 18:00:34.574784] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:07.660 [2024-11-20 18:00:34.574808] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:07.660 [2024-11-20 18:00:34.574823] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:07.660 [2024-11-20 18:00:34.574840] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:07.660 [2024-11-20 18:00:34.574854] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:07.660 [2024-11-20 18:00:34.574869] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:07.660 [2024-11-20 18:00:34.574880] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:07.660 [2024-11-20 18:00:34.574898] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:07.660 [2024-11-20 18:00:34.574908] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:07.660 [2024-11-20 18:00:34.574922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.660 [2024-11-20 18:00:34.574935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:07.660 [2024-11-20 18:00:34.574950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:26:07.660 [2024-11-20 18:00:34.574972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.660 [2024-11-20 18:00:34.575052] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.660 [2024-11-20 18:00:34.575064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:07.660 [2024-11-20 18:00:34.575078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:26:07.660 [2024-11-20 18:00:34.575088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.660 [2024-11-20 18:00:34.575190] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:07.660 [2024-11-20 18:00:34.575204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:07.660 [2024-11-20 18:00:34.575218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:07.660 [2024-11-20 18:00:34.575229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:07.660 [2024-11-20 18:00:34.575243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:07.660 [2024-11-20 18:00:34.575252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:07.660 [2024-11-20 18:00:34.575266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:07.660 [2024-11-20 18:00:34.575276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:07.660 [2024-11-20 18:00:34.575291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:07.660 [2024-11-20 18:00:34.575300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:07.660 [2024-11-20 18:00:34.575314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:07.660 [2024-11-20 18:00:34.575324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:07.660 [2024-11-20 18:00:34.575336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:07.660 [2024-11-20 18:00:34.575346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:07.660 [2024-11-20 18:00:34.575359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:07.660 [2024-11-20 18:00:34.575368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:07.660 [2024-11-20 18:00:34.575386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:07.660 [2024-11-20 18:00:34.575395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:07.660 [2024-11-20 18:00:34.575409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:07.660 [2024-11-20 18:00:34.575418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:07.660 [2024-11-20 18:00:34.575431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:07.660 [2024-11-20 18:00:34.575441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:07.660 [2024-11-20 18:00:34.575454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:07.660 [2024-11-20 18:00:34.575463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:07.660 [2024-11-20 18:00:34.575475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:07.660 [2024-11-20 18:00:34.575485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:07.660 [2024-11-20 18:00:34.575497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:07.660 [2024-11-20 18:00:34.575506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:07.660 [2024-11-20 18:00:34.575519] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:07.660 [2024-11-20 18:00:34.575528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:07.661 [2024-11-20 18:00:34.575540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:07.661 [2024-11-20 18:00:34.575549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:07.661 [2024-11-20 18:00:34.575564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:07.661 [2024-11-20 18:00:34.575574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:07.661 [2024-11-20 18:00:34.575586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:07.661 [2024-11-20 18:00:34.575595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:07.661 [2024-11-20 18:00:34.575607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:07.661 [2024-11-20 18:00:34.575616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:07.661 [2024-11-20 18:00:34.575628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:07.661 [2024-11-20 18:00:34.575637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:07.661 [2024-11-20 18:00:34.575650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:07.661 [2024-11-20 18:00:34.575659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:07.661 [2024-11-20 18:00:34.575671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:07.661 [2024-11-20 18:00:34.575680] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:07.661 [2024-11-20 18:00:34.575693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:07.661 [2024-11-20 18:00:34.575703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:07.661 [2024-11-20 18:00:34.575717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:07.661 [2024-11-20 18:00:34.575728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:07.661 [2024-11-20 18:00:34.575743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:07.661 [2024-11-20 18:00:34.575753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:07.661 [2024-11-20 18:00:34.575775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:07.661 [2024-11-20 18:00:34.575786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:07.661 [2024-11-20 18:00:34.575799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:07.661 [2024-11-20 18:00:34.575815] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:07.661 [2024-11-20 18:00:34.575831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:07.661 [2024-11-20 18:00:34.575847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:07.661 [2024-11-20 18:00:34.575861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:07.661 [2024-11-20 18:00:34.575871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:07.661 [2024-11-20 18:00:34.575885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:07.661 [2024-11-20 18:00:34.575895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:07.661 [2024-11-20 18:00:34.575909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:07.661 [2024-11-20 18:00:34.575919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:07.661 [2024-11-20 18:00:34.575933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:07.661 [2024-11-20 18:00:34.575943] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:07.661 [2024-11-20 18:00:34.575960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:07.661 [2024-11-20 18:00:34.575971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:07.661 [2024-11-20 18:00:34.575984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:07.661 [2024-11-20 18:00:34.575995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:07.661 [2024-11-20 18:00:34.576009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:07.661 [2024-11-20 18:00:34.576020] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:07.661 [2024-11-20 18:00:34.576034] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:07.661 [2024-11-20 18:00:34.576046] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:07.661 [2024-11-20 18:00:34.576064] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:07.661 [2024-11-20 18:00:34.576074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:07.661 [2024-11-20 18:00:34.576088] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:07.661 [2024-11-20 18:00:34.576100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:07.661 [2024-11-20 18:00:34.576113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:07.661 [2024-11-20 18:00:34.576123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.972 ms 00:26:07.661 [2024-11-20 18:00:34.576137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:07.661 [2024-11-20 18:00:34.576181] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:26:07.661 [2024-11-20 18:00:34.576200] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:26:12.934 [2024-11-20 18:00:39.280580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.280670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:26:12.935 [2024-11-20 18:00:39.280690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4712.035 ms 00:26:12.935 [2024-11-20 18:00:39.280706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.328108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.328186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:12.935 [2024-11-20 18:00:39.328205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.010 ms 00:26:12.935 [2024-11-20 18:00:39.328221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.328394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.328412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:12.935 [2024-11-20 18:00:39.328424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:26:12.935 [2024-11-20 18:00:39.328447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.384604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.384669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:12.935 [2024-11-20 18:00:39.384686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.178 ms 00:26:12.935 [2024-11-20 18:00:39.384702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.384752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.384785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:12.935 [2024-11-20 18:00:39.384798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:12.935 [2024-11-20 18:00:39.384812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.385622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.385652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:12.935 [2024-11-20 18:00:39.385664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.738 ms 00:26:12.935 [2024-11-20 18:00:39.385678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.385809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.385826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:12.935 [2024-11-20 18:00:39.385842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:26:12.935 [2024-11-20 18:00:39.385858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.410853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.410912] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:12.935 [2024-11-20 18:00:39.410927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.011 ms 00:26:12.935 [2024-11-20 18:00:39.410941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.438206] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:12.935 [2024-11-20 18:00:39.443239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.443273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:12.935 [2024-11-20 18:00:39.443292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.226 ms 00:26:12.935 [2024-11-20 18:00:39.443303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.590352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.590417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:26:12.935 [2024-11-20 18:00:39.590438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 147.235 ms 00:26:12.935 [2024-11-20 18:00:39.590451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.590669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.590689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:12.935 [2024-11-20 18:00:39.590708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.165 ms 00:26:12.935 [2024-11-20 18:00:39.590719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.628076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.628129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:26:12.935 [2024-11-20 18:00:39.628150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.340 ms 00:26:12.935 [2024-11-20 18:00:39.628162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.665073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.665123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:26:12.935 [2024-11-20 18:00:39.665144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.912 ms 00:26:12.935 [2024-11-20 18:00:39.665155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.665966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.665997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:12.935 [2024-11-20 18:00:39.666013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:26:12.935 [2024-11-20 18:00:39.666029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.796278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.796341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:26:12.935 [2024-11-20 18:00:39.796367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 130.389 ms 00:26:12.935 [2024-11-20 18:00:39.796379] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.836313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.836367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:26:12.935 [2024-11-20 18:00:39.836386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.901 ms 00:26:12.935 [2024-11-20 18:00:39.836398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.873844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.873892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:26:12.935 [2024-11-20 18:00:39.873911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.453 ms 00:26:12.935 [2024-11-20 18:00:39.873921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.910619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.910665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:12.935 [2024-11-20 18:00:39.910684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.704 ms 00:26:12.935 [2024-11-20 18:00:39.910695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.910748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.910761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:12.935 [2024-11-20 18:00:39.910790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:12.935 [2024-11-20 18:00:39.910802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.910923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.935 [2024-11-20 18:00:39.910937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:12.935 [2024-11-20 18:00:39.910954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:26:12.935 [2024-11-20 18:00:39.910965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.935 [2024-11-20 18:00:39.912335] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 5370.714 ms, result 0 00:26:12.935 { 00:26:12.935 "name": "ftl0", 00:26:12.935 "uuid": "a338d17b-0145-4b0b-9a6a-09252efa416c" 00:26:12.935 } 00:26:12.935 18:00:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:26:12.935 18:00:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:26:13.194 18:00:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:26:13.194 18:00:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:26:13.194 18:00:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:26:13.454 /dev/nbd0 00:26:13.454 18:00:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:26:13.454 18:00:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:26:13.454 18:00:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:26:13.454 18:00:40 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:26:13.454 18:00:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:26:13.454 18:00:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:26:13.454 18:00:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:26:13.454 18:00:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:26:13.454 18:00:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:26:13.454 18:00:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:26:13.454 1+0 records in 00:26:13.454 1+0 records out 00:26:13.454 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250037 s, 16.4 MB/s 00:26:13.454 18:00:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:13.454 18:00:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:26:13.454 18:00:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:26:13.454 18:00:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:26:13.454 18:00:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:26:13.454 18:00:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:26:13.454 [2024-11-20 18:00:40.513044] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:26:13.454 [2024-11-20 18:00:40.513809] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81344 ] 00:26:13.713 [2024-11-20 18:00:40.693878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.713 [2024-11-20 18:00:40.804231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:15.091  [2024-11-20T18:00:43.206Z] Copying: 201/1024 [MB] (201 MBps) [2024-11-20T18:00:44.147Z] Copying: 404/1024 [MB] (203 MBps) [2024-11-20T18:00:45.526Z] Copying: 606/1024 [MB] (201 MBps) [2024-11-20T18:00:46.462Z] Copying: 803/1024 [MB] (197 MBps) [2024-11-20T18:00:46.462Z] Copying: 1004/1024 [MB] (201 MBps) [2024-11-20T18:00:47.399Z] Copying: 1024/1024 [MB] (average 201 MBps) 00:26:20.223 00:26:20.223 18:00:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:26:22.127 18:00:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:26:22.127 [2024-11-20 18:00:49.178175] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
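The stretch of trace above is the entire device bring-up plus the first data pass, compressed into one stream: attach the 5120 MiB QEMU NVMe at 0000:00:11.0 as the base device, carve a thin-provisioned 103424 MiB lvol on lvstore "lvs" (thin provisioning is what lets a 103424 MiB volume sit on a 5120 MiB device, which is why the [[ 103424 -le 5120 ]] guard passes through), attach the second NVMe at 0000:00:10.0 and split off a 5171 MiB partition as the NV cache, create ftl0, export it over /dev/nbd0, and write 1 GiB of urandom through it. The FTL layout numbers in the startup dump are internally consistent: 20971520 L2P entries at 4 B each is exactly the 80.00 MiB l2p region (blk_sz 0x5000 = 20480 blocks of 4 KiB), and --l2p_dram_limit 10 caps the resident slice, matching the later "l2p maximum resident size is: 9 (of 10) MiB" notice. A condensed sketch of the bring-up follows; every RPC name, PCI address, size and UUID is taken from the log, and capturing the new lvol's name from bdev_lvol_create's stdout is this sketch's assumption about how to wire the steps together.

    #!/usr/bin/env bash
    # Condensed sketch of create_base_bdev, create_nv_cache_bdev,
    # bdev_ftl_create and the NBD write traced above. Error handling omitted.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Base device: the QEMU NVMe at 0000:00:11.0 -> nvme0n1.
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0

    # Probe geometry the way get_bdev_size does: block_size * num_blocks.
    bs=$($rpc bdev_get_bdevs -b nvme0n1 | jq '.[] .block_size')   # 4096
    nb=$($rpc bdev_get_bdevs -b nvme0n1 | jq '.[] .num_blocks')   # 1310720
    echo "nvme0n1: $((bs * nb / 1024 / 1024)) MiB"                # 5120 MiB

    # Thin-provisioned (-t) 103424 MiB lvol on lvstore "lvs".
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs
    lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t \
           -u f9547cfc-9a37-410f-9e03-0dbf21a9dc77)   # lvol uuid in the log:
                                                      # b5312554-0c5a-4906-a8c5-f53a5986d433

    # NV cache: second NVMe at 0000:00:10.0, split into one 5171 MiB partition.
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    $rpc bdev_split_create nvc0n1 -s 5171 1           # -> nvc0n1p0

    # FTL device with the L2P mostly on disk: 20971520 entries * 4 B = 80 MiB
    # of table, of which --l2p_dram_limit keeps only ~10 MiB resident.
    $rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" --l2p_dram_limit 10 -c nvc0n1p0

    # Expose ftl0 as a kernel block device and push 262144 * 4096 B = 1 GiB
    # of pre-generated random data through it, bypassing the page cache.
    modprobe nbd
    $rpc nbd_start_disk ftl0 /dev/nbd0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
        --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

The progress lines that follow give a feel for the cost of the FTL path: the plain-file prefill from /dev/urandom averaged about 201 MBps, while the same 1 GiB written through ftl0 over NBD with oflag=direct sustains roughly 16 to 17 MBps for about a minute.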
00:26:22.127 [2024-11-20 18:00:49.179093] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81437 ] 00:26:22.385 [2024-11-20 18:00:49.377473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.385 [2024-11-20 18:00:49.487567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.763  [2024-11-20T18:00:51.877Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-20T18:00:52.814Z] Copying: 33/1024 [MB] (16 MBps) [2024-11-20T18:00:54.195Z] Copying: 49/1024 [MB] (16 MBps) [2024-11-20T18:00:55.133Z] Copying: 66/1024 [MB] (16 MBps) [2024-11-20T18:00:56.071Z] Copying: 83/1024 [MB] (16 MBps) [2024-11-20T18:00:57.009Z] Copying: 100/1024 [MB] (17 MBps) [2024-11-20T18:00:57.947Z] Copying: 117/1024 [MB] (16 MBps) [2024-11-20T18:00:58.909Z] Copying: 134/1024 [MB] (17 MBps) [2024-11-20T18:00:59.847Z] Copying: 151/1024 [MB] (16 MBps) [2024-11-20T18:01:01.227Z] Copying: 167/1024 [MB] (16 MBps) [2024-11-20T18:01:01.795Z] Copying: 184/1024 [MB] (16 MBps) [2024-11-20T18:01:03.175Z] Copying: 201/1024 [MB] (16 MBps) [2024-11-20T18:01:04.115Z] Copying: 218/1024 [MB] (16 MBps) [2024-11-20T18:01:05.055Z] Copying: 235/1024 [MB] (16 MBps) [2024-11-20T18:01:05.994Z] Copying: 251/1024 [MB] (16 MBps) [2024-11-20T18:01:06.933Z] Copying: 268/1024 [MB] (16 MBps) [2024-11-20T18:01:07.873Z] Copying: 284/1024 [MB] (16 MBps) [2024-11-20T18:01:08.811Z] Copying: 301/1024 [MB] (16 MBps) [2024-11-20T18:01:10.206Z] Copying: 317/1024 [MB] (16 MBps) [2024-11-20T18:01:11.143Z] Copying: 334/1024 [MB] (16 MBps) [2024-11-20T18:01:12.080Z] Copying: 350/1024 [MB] (16 MBps) [2024-11-20T18:01:13.019Z] Copying: 366/1024 [MB] (16 MBps) [2024-11-20T18:01:13.961Z] Copying: 383/1024 [MB] (16 MBps) [2024-11-20T18:01:14.904Z] Copying: 399/1024 [MB] (16 MBps) [2024-11-20T18:01:15.840Z] Copying: 416/1024 [MB] (16 MBps) [2024-11-20T18:01:16.779Z] Copying: 433/1024 [MB] (16 MBps) [2024-11-20T18:01:18.159Z] Copying: 449/1024 [MB] (16 MBps) [2024-11-20T18:01:19.097Z] Copying: 465/1024 [MB] (16 MBps) [2024-11-20T18:01:20.031Z] Copying: 481/1024 [MB] (16 MBps) [2024-11-20T18:01:20.967Z] Copying: 498/1024 [MB] (16 MBps) [2024-11-20T18:01:21.905Z] Copying: 515/1024 [MB] (17 MBps) [2024-11-20T18:01:22.841Z] Copying: 532/1024 [MB] (17 MBps) [2024-11-20T18:01:23.779Z] Copying: 550/1024 [MB] (17 MBps) [2024-11-20T18:01:25.157Z] Copying: 566/1024 [MB] (16 MBps) [2024-11-20T18:01:26.094Z] Copying: 584/1024 [MB] (17 MBps) [2024-11-20T18:01:27.030Z] Copying: 601/1024 [MB] (17 MBps) [2024-11-20T18:01:27.991Z] Copying: 618/1024 [MB] (16 MBps) [2024-11-20T18:01:28.928Z] Copying: 635/1024 [MB] (17 MBps) [2024-11-20T18:01:29.864Z] Copying: 652/1024 [MB] (17 MBps) [2024-11-20T18:01:30.801Z] Copying: 670/1024 [MB] (17 MBps) [2024-11-20T18:01:32.180Z] Copying: 686/1024 [MB] (16 MBps) [2024-11-20T18:01:32.749Z] Copying: 703/1024 [MB] (16 MBps) [2024-11-20T18:01:34.127Z] Copying: 720/1024 [MB] (16 MBps) [2024-11-20T18:01:35.064Z] Copying: 736/1024 [MB] (16 MBps) [2024-11-20T18:01:36.001Z] Copying: 753/1024 [MB] (16 MBps) [2024-11-20T18:01:36.938Z] Copying: 769/1024 [MB] (16 MBps) [2024-11-20T18:01:37.883Z] Copying: 786/1024 [MB] (16 MBps) [2024-11-20T18:01:38.823Z] Copying: 803/1024 [MB] (16 MBps) [2024-11-20T18:01:39.760Z] Copying: 820/1024 [MB] (17 MBps) [2024-11-20T18:01:41.156Z] Copying: 837/1024 [MB] (17 MBps) 
[2024-11-20T18:01:41.752Z] Copying: 855/1024 [MB] (17 MBps) [2024-11-20T18:01:43.129Z] Copying: 872/1024 [MB] (17 MBps) [2024-11-20T18:01:44.066Z] Copying: 890/1024 [MB] (17 MBps) [2024-11-20T18:01:45.002Z] Copying: 908/1024 [MB] (17 MBps) [2024-11-20T18:01:45.939Z] Copying: 925/1024 [MB] (17 MBps) [2024-11-20T18:01:46.876Z] Copying: 943/1024 [MB] (17 MBps) [2024-11-20T18:01:47.814Z] Copying: 960/1024 [MB] (17 MBps) [2024-11-20T18:01:48.752Z] Copying: 977/1024 [MB] (16 MBps) [2024-11-20T18:01:50.129Z] Copying: 993/1024 [MB] (16 MBps) [2024-11-20T18:01:50.697Z] Copying: 1010/1024 [MB] (16 MBps) [2024-11-20T18:01:52.074Z] Copying: 1024/1024 [MB] (average 16 MBps) 00:27:24.898 00:27:24.898 18:01:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:27:24.898 18:01:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:27:24.898 18:01:51 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:27:25.157 [2024-11-20 18:01:52.160044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.157 [2024-11-20 18:01:52.160099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:25.157 [2024-11-20 18:01:52.160116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:25.157 [2024-11-20 18:01:52.160131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.157 [2024-11-20 18:01:52.160160] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:25.157 [2024-11-20 18:01:52.164816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.157 [2024-11-20 18:01:52.164849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:25.157 [2024-11-20 18:01:52.164866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.640 ms 00:27:25.157 [2024-11-20 18:01:52.164876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.157 [2024-11-20 18:01:52.167145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.157 [2024-11-20 18:01:52.167324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:25.157 [2024-11-20 18:01:52.167352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.235 ms 00:27:25.157 [2024-11-20 18:01:52.167364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.157 [2024-11-20 18:01:52.186502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.157 [2024-11-20 18:01:52.186669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:25.157 [2024-11-20 18:01:52.186698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.138 ms 00:27:25.157 [2024-11-20 18:01:52.186710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.157 [2024-11-20 18:01:52.191495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.157 [2024-11-20 18:01:52.191527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:25.157 [2024-11-20 18:01:52.191542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.746 ms 00:27:25.157 [2024-11-20 18:01:52.191552] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.157 [2024-11-20 18:01:52.227377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:25.157 [2024-11-20 18:01:52.227413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:25.157 [2024-11-20 18:01:52.227430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.798 ms 00:27:25.157 [2024-11-20 18:01:52.227439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.157 [2024-11-20 18:01:52.249256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.157 [2024-11-20 18:01:52.249405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:25.157 [2024-11-20 18:01:52.249458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.805 ms 00:27:25.157 [2024-11-20 18:01:52.249473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.157 [2024-11-20 18:01:52.249660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.157 [2024-11-20 18:01:52.249675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:25.157 [2024-11-20 18:01:52.249691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:27:25.157 [2024-11-20 18:01:52.249701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.157 [2024-11-20 18:01:52.285199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.157 [2024-11-20 18:01:52.285234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:25.157 [2024-11-20 18:01:52.285250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.512 ms 00:27:25.157 [2024-11-20 18:01:52.285260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.158 [2024-11-20 18:01:52.319565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.158 [2024-11-20 18:01:52.319600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:25.158 [2024-11-20 18:01:52.319616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.297 ms 00:27:25.158 [2024-11-20 18:01:52.319626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.417 [2024-11-20 18:01:52.353897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.417 [2024-11-20 18:01:52.354064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:25.417 [2024-11-20 18:01:52.354089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.278 ms 00:27:25.417 [2024-11-20 18:01:52.354100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.417 [2024-11-20 18:01:52.387313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.417 [2024-11-20 18:01:52.387347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:25.417 [2024-11-20 18:01:52.387363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.134 ms 00:27:25.417 [2024-11-20 18:01:52.387372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.417 [2024-11-20 18:01:52.387414] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:25.418 [2024-11-20 18:01:52.387438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387464] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387789] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.387990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 
[2024-11-20 18:01:52.388108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:27:25.418 [2024-11-20 18:01:52.388411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:25.418 [2024-11-20 18:01:52.388515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:25.419 [2024-11-20 18:01:52.388532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:25.419 [2024-11-20 18:01:52.388542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:25.419 [2024-11-20 18:01:52.388555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:25.419 [2024-11-20 18:01:52.388566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:25.419 [2024-11-20 18:01:52.388579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:25.419 [2024-11-20 18:01:52.388589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:25.419 [2024-11-20 18:01:52.388602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:25.419 [2024-11-20 18:01:52.388612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:25.419 [2024-11-20 18:01:52.388625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:25.419 [2024-11-20 18:01:52.388635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:25.419 [2024-11-20 18:01:52.388649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:25.419 [2024-11-20 18:01:52.388659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:25.419 [2024-11-20 18:01:52.388672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:25.419 [2024-11-20 18:01:52.388690] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:25.419 [2024-11-20 18:01:52.388702] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a338d17b-0145-4b0b-9a6a-09252efa416c 
00:27:25.419 [2024-11-20 18:01:52.388713] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:27:25.419 [2024-11-20 18:01:52.388728] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:25.419 [2024-11-20 18:01:52.388738] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:25.419 [2024-11-20 18:01:52.388755] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:25.419 [2024-11-20 18:01:52.388764] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:25.419 [2024-11-20 18:01:52.388789] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:25.419 [2024-11-20 18:01:52.388798] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:25.419 [2024-11-20 18:01:52.388810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:25.419 [2024-11-20 18:01:52.388819] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:25.419 [2024-11-20 18:01:52.388831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.419 [2024-11-20 18:01:52.388847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:25.419 [2024-11-20 18:01:52.388860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.421 ms 00:27:25.419 [2024-11-20 18:01:52.388870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.419 [2024-11-20 18:01:52.408715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.419 [2024-11-20 18:01:52.408750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:25.419 [2024-11-20 18:01:52.408782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.811 ms 00:27:25.419 [2024-11-20 18:01:52.408794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.419 [2024-11-20 18:01:52.409363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:25.419 [2024-11-20 18:01:52.409387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:25.419 [2024-11-20 18:01:52.409402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:27:25.419 [2024-11-20 18:01:52.409411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.419 [2024-11-20 18:01:52.477807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.419 [2024-11-20 18:01:52.477846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:25.419 [2024-11-20 18:01:52.477864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.419 [2024-11-20 18:01:52.477876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.419 [2024-11-20 18:01:52.477948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.419 [2024-11-20 18:01:52.477960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:25.419 [2024-11-20 18:01:52.477974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.419 [2024-11-20 18:01:52.477984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.419 [2024-11-20 18:01:52.478081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.419 [2024-11-20 18:01:52.478099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:25.419 [2024-11-20 18:01:52.478121] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.419 [2024-11-20 18:01:52.478132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.419 [2024-11-20 18:01:52.478159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.419 [2024-11-20 18:01:52.478171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:25.419 [2024-11-20 18:01:52.478185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.419 [2024-11-20 18:01:52.478194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.678 [2024-11-20 18:01:52.608468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.678 [2024-11-20 18:01:52.608718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:25.678 [2024-11-20 18:01:52.608748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.678 [2024-11-20 18:01:52.608771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.678 [2024-11-20 18:01:52.709136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.678 [2024-11-20 18:01:52.709186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:25.678 [2024-11-20 18:01:52.709205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.678 [2024-11-20 18:01:52.709217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.678 [2024-11-20 18:01:52.709360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.678 [2024-11-20 18:01:52.709374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:25.678 [2024-11-20 18:01:52.709389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.678 [2024-11-20 18:01:52.709403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.678 [2024-11-20 18:01:52.709480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.678 [2024-11-20 18:01:52.709493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:25.678 [2024-11-20 18:01:52.709508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.678 [2024-11-20 18:01:52.709518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.678 [2024-11-20 18:01:52.709658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.678 [2024-11-20 18:01:52.709672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:25.678 [2024-11-20 18:01:52.709686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.678 [2024-11-20 18:01:52.709699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.678 [2024-11-20 18:01:52.709742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.678 [2024-11-20 18:01:52.709755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:25.678 [2024-11-20 18:01:52.709974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.678 [2024-11-20 18:01:52.710042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.678 [2024-11-20 18:01:52.710136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.678 [2024-11-20 18:01:52.710172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:27:25.678 [2024-11-20 18:01:52.710207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.679 [2024-11-20 18:01:52.710239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.679 [2024-11-20 18:01:52.710332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:25.679 [2024-11-20 18:01:52.710432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:25.679 [2024-11-20 18:01:52.710475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:25.679 [2024-11-20 18:01:52.710506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:25.679 [2024-11-20 18:01:52.710725] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 551.507 ms, result 0 00:27:25.679 true 00:27:25.679 18:01:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81180 00:27:25.679 18:01:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81180 00:27:25.679 18:01:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:27:25.679 [2024-11-20 18:01:52.836191] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:27:25.679 [2024-11-20 18:01:52.836325] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82077 ] 00:27:25.937 [2024-11-20 18:01:53.018136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.195 [2024-11-20 18:01:53.149963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:27.573  [2024-11-20T18:01:55.685Z] Copying: 194/1024 [MB] (194 MBps) [2024-11-20T18:01:56.622Z] Copying: 399/1024 [MB] (204 MBps) [2024-11-20T18:01:57.558Z] Copying: 603/1024 [MB] (203 MBps) [2024-11-20T18:01:58.936Z] Copying: 807/1024 [MB] (203 MBps) [2024-11-20T18:01:58.936Z] Copying: 1006/1024 [MB] (199 MBps) [2024-11-20T18:01:59.872Z] Copying: 1024/1024 [MB] (average 201 MBps) 00:27:32.696 00:27:32.696 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81180 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:27:32.696 18:01:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:32.954 [2024-11-20 18:01:59.921933] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
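Up to this point ftl0 was taken down cleanly ('FTL shutdown', 551.507 ms, result 0). The dirty half of the test starts at dirty_shutdown.sh@83-88, condensed below; $svcpid stands in for the spdk_tgt pid (81180 in this run), paths are shortened, and the flags are exactly the ones the trace echoes:

    kill -9 "$svcpid"                           # SIGKILL, so no FTL shutdown path runs
    rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"  # drop the dead target's trace buffer
    # Generate a second 1 GiB file, then write it into the ftl0 bdev itself at a
    # 262144-block offset, re-creating the device from the saved ftl.json config:
    spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
    spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json

The startup log below shows what that buys: the nvc0n1 open is retried while a blobstore underneath runs recovery, the superblock loads with 'SHM: clean 0, shm_clean 0', and FTL walks its restore chain instead of a clean open.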
00:27:32.954 [2024-11-20 18:01:59.922919] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82151 ] 00:27:32.954 [2024-11-20 18:02:00.106259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.212 [2024-11-20 18:02:00.235442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:33.471 [2024-11-20 18:02:00.641159] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:33.471 [2024-11-20 18:02:00.641465] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:33.729 [2024-11-20 18:02:00.708188] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:27:33.729 [2024-11-20 18:02:00.708727] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:27:33.729 [2024-11-20 18:02:00.708991] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:27:33.990 [2024-11-20 18:02:01.032440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.990 [2024-11-20 18:02:01.032492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:33.990 [2024-11-20 18:02:01.032511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:33.990 [2024-11-20 18:02:01.032523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.990 [2024-11-20 18:02:01.032580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.990 [2024-11-20 18:02:01.032593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:33.990 [2024-11-20 18:02:01.032605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:27:33.990 [2024-11-20 18:02:01.032616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.990 [2024-11-20 18:02:01.032639] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:33.990 [2024-11-20 18:02:01.033734] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:33.990 [2024-11-20 18:02:01.033780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.990 [2024-11-20 18:02:01.033794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:33.990 [2024-11-20 18:02:01.033806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.134 ms 00:27:33.990 [2024-11-20 18:02:01.033817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.990 [2024-11-20 18:02:01.036175] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:33.990 [2024-11-20 18:02:01.056779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.990 [2024-11-20 18:02:01.056835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:33.990 [2024-11-20 18:02:01.056852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.638 ms 00:27:33.990 [2024-11-20 18:02:01.056863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.990 [2024-11-20 18:02:01.056929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.990 [2024-11-20 18:02:01.056942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:27:33.990 [2024-11-20 18:02:01.056954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:33.990 [2024-11-20 18:02:01.056964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.990 [2024-11-20 18:02:01.068848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.990 [2024-11-20 18:02:01.068879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:33.990 [2024-11-20 18:02:01.068892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.829 ms 00:27:33.990 [2024-11-20 18:02:01.068918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.990 [2024-11-20 18:02:01.069006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.990 [2024-11-20 18:02:01.069020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:33.990 [2024-11-20 18:02:01.069032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:33.990 [2024-11-20 18:02:01.069042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.990 [2024-11-20 18:02:01.069103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.990 [2024-11-20 18:02:01.069116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:33.990 [2024-11-20 18:02:01.069127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:27:33.990 [2024-11-20 18:02:01.069137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.990 [2024-11-20 18:02:01.069163] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:33.990 [2024-11-20 18:02:01.074916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.990 [2024-11-20 18:02:01.074947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:33.990 [2024-11-20 18:02:01.074960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.769 ms 00:27:33.990 [2024-11-20 18:02:01.074985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.990 [2024-11-20 18:02:01.075018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.990 [2024-11-20 18:02:01.075029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:33.990 [2024-11-20 18:02:01.075040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:33.990 [2024-11-20 18:02:01.075050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.990 [2024-11-20 18:02:01.075092] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:33.990 [2024-11-20 18:02:01.075116] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:33.990 [2024-11-20 18:02:01.075153] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:33.990 [2024-11-20 18:02:01.075173] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:33.990 [2024-11-20 18:02:01.075264] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:33.990 [2024-11-20 18:02:01.075277] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:33.990 
[2024-11-20 18:02:01.075291] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:33.990 [2024-11-20 18:02:01.075305] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:33.990 [2024-11-20 18:02:01.075321] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:33.990 [2024-11-20 18:02:01.075332] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:33.990 [2024-11-20 18:02:01.075343] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:33.990 [2024-11-20 18:02:01.075353] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:33.990 [2024-11-20 18:02:01.075363] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:33.990 [2024-11-20 18:02:01.075373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.990 [2024-11-20 18:02:01.075383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:33.990 [2024-11-20 18:02:01.075394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:27:33.990 [2024-11-20 18:02:01.075404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.990 [2024-11-20 18:02:01.075473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.990 [2024-11-20 18:02:01.075488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:33.990 [2024-11-20 18:02:01.075499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:33.990 [2024-11-20 18:02:01.075509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.990 [2024-11-20 18:02:01.075606] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:33.990 [2024-11-20 18:02:01.075621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:33.990 [2024-11-20 18:02:01.075632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:33.990 [2024-11-20 18:02:01.075643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.990 [2024-11-20 18:02:01.075653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:33.990 [2024-11-20 18:02:01.075663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:33.990 [2024-11-20 18:02:01.075672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:33.990 [2024-11-20 18:02:01.075683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:33.990 [2024-11-20 18:02:01.075693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:33.990 [2024-11-20 18:02:01.075702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:33.990 [2024-11-20 18:02:01.075713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:33.990 [2024-11-20 18:02:01.075733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:33.990 [2024-11-20 18:02:01.075743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:33.990 [2024-11-20 18:02:01.075752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:33.990 [2024-11-20 18:02:01.075762] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:33.990 [2024-11-20 18:02:01.075772] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.991 [2024-11-20 18:02:01.075793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:33.991 [2024-11-20 18:02:01.075805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:33.991 [2024-11-20 18:02:01.075814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.991 [2024-11-20 18:02:01.075824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:33.991 [2024-11-20 18:02:01.075833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:33.991 [2024-11-20 18:02:01.075843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:33.991 [2024-11-20 18:02:01.075852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:33.991 [2024-11-20 18:02:01.075861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:33.991 [2024-11-20 18:02:01.075870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:33.991 [2024-11-20 18:02:01.075880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:33.991 [2024-11-20 18:02:01.075889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:33.991 [2024-11-20 18:02:01.075898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:33.991 [2024-11-20 18:02:01.075907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:33.991 [2024-11-20 18:02:01.075917] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:33.991 [2024-11-20 18:02:01.075926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:33.991 [2024-11-20 18:02:01.075935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:33.991 [2024-11-20 18:02:01.075944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:33.991 [2024-11-20 18:02:01.075953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:33.991 [2024-11-20 18:02:01.075961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:33.991 [2024-11-20 18:02:01.075971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:33.991 [2024-11-20 18:02:01.075980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:33.991 [2024-11-20 18:02:01.075989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:33.991 [2024-11-20 18:02:01.075998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:33.991 [2024-11-20 18:02:01.076006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.991 [2024-11-20 18:02:01.076015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:33.991 [2024-11-20 18:02:01.076024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:33.991 [2024-11-20 18:02:01.076037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.991 [2024-11-20 18:02:01.076047] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:33.991 [2024-11-20 18:02:01.076057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:33.991 [2024-11-20 18:02:01.076068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:33.991 [2024-11-20 18:02:01.076094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:33.991 [2024-11-20 
18:02:01.076105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:33.991 [2024-11-20 18:02:01.076115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:33.991 [2024-11-20 18:02:01.076125] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:33.991 [2024-11-20 18:02:01.076134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:33.991 [2024-11-20 18:02:01.076143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:33.991 [2024-11-20 18:02:01.076152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:33.991 [2024-11-20 18:02:01.076163] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:33.991 [2024-11-20 18:02:01.076175] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:33.991 [2024-11-20 18:02:01.076188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:33.991 [2024-11-20 18:02:01.076199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:33.991 [2024-11-20 18:02:01.076211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:33.991 [2024-11-20 18:02:01.076221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:33.991 [2024-11-20 18:02:01.076232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:33.991 [2024-11-20 18:02:01.076242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:33.991 [2024-11-20 18:02:01.076252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:33.991 [2024-11-20 18:02:01.076262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:33.991 [2024-11-20 18:02:01.076273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:33.991 [2024-11-20 18:02:01.076283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:33.991 [2024-11-20 18:02:01.076293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:33.991 [2024-11-20 18:02:01.076303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:33.991 [2024-11-20 18:02:01.076313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:33.991 [2024-11-20 18:02:01.076323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:33.991 [2024-11-20 18:02:01.076332] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:27:33.991 [2024-11-20 18:02:01.076343] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:33.991 [2024-11-20 18:02:01.076355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:33.991 [2024-11-20 18:02:01.076365] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:33.991 [2024-11-20 18:02:01.076375] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:33.991 [2024-11-20 18:02:01.076387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:33.991 [2024-11-20 18:02:01.076398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.991 [2024-11-20 18:02:01.076409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:33.991 [2024-11-20 18:02:01.076419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.846 ms 00:27:33.991 [2024-11-20 18:02:01.076429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.991 [2024-11-20 18:02:01.125064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.991 [2024-11-20 18:02:01.125106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:33.991 [2024-11-20 18:02:01.125121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.638 ms 00:27:33.991 [2024-11-20 18:02:01.125149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:33.991 [2024-11-20 18:02:01.125234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:33.991 [2024-11-20 18:02:01.125251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:33.991 [2024-11-20 18:02:01.125262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:27:33.991 [2024-11-20 18:02:01.125273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.251 [2024-11-20 18:02:01.185844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.251 [2024-11-20 18:02:01.185889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:34.251 [2024-11-20 18:02:01.185909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.566 ms 00:27:34.251 [2024-11-20 18:02:01.185920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.251 [2024-11-20 18:02:01.185965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.251 [2024-11-20 18:02:01.185977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:34.251 [2024-11-20 18:02:01.185988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:34.251 [2024-11-20 18:02:01.185999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.251 [2024-11-20 18:02:01.186810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.251 [2024-11-20 18:02:01.186831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:34.251 [2024-11-20 18:02:01.186843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:27:34.251 [2024-11-20 18:02:01.186854] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.251 [2024-11-20 18:02:01.186997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.251 [2024-11-20 18:02:01.187014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:34.251 [2024-11-20 18:02:01.187025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:27:34.251 [2024-11-20 18:02:01.187036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.251 [2024-11-20 18:02:01.209740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.251 [2024-11-20 18:02:01.209791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:34.251 [2024-11-20 18:02:01.209807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.717 ms 00:27:34.251 [2024-11-20 18:02:01.209818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.251 [2024-11-20 18:02:01.229828] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:34.251 [2024-11-20 18:02:01.229866] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:34.251 [2024-11-20 18:02:01.229884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.251 [2024-11-20 18:02:01.229896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:34.251 [2024-11-20 18:02:01.229909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.971 ms 00:27:34.251 [2024-11-20 18:02:01.229920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.251 [2024-11-20 18:02:01.261241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.251 [2024-11-20 18:02:01.261283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:34.251 [2024-11-20 18:02:01.261314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.326 ms 00:27:34.251 [2024-11-20 18:02:01.261326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.251 [2024-11-20 18:02:01.280174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.251 [2024-11-20 18:02:01.280213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:34.251 [2024-11-20 18:02:01.280228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.822 ms 00:27:34.251 [2024-11-20 18:02:01.280240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.251 [2024-11-20 18:02:01.298599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.251 [2024-11-20 18:02:01.298639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:34.251 [2024-11-20 18:02:01.298654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.346 ms 00:27:34.251 [2024-11-20 18:02:01.298665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.252 [2024-11-20 18:02:01.299430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.252 [2024-11-20 18:02:01.299464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:34.252 [2024-11-20 18:02:01.299479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:27:34.252 [2024-11-20 18:02:01.299490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
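The recovery bookkeeping up to here is cheap next to the I/O itself: load super block 20.638 ms, NV cache init 60.566 ms, and the metadata restores (NV cache 19.971 ms, valid map 31.326 ms, band info 18.822 ms, trim 18.346 ms) all land well under the copy times above; the P2L checkpoint restore that follows below is the longest single step at 96.979 ms. The eventual pass/fail check is presumably a digest comparison later in dirty_shutdown.sh; a sketch of its shape, with hypothetical file names and an assumed read-back step (only the reference digest, taken at dirty_shutdown.sh@76 above, appears in this excerpt):

    # Digest captured before the crash (dirty_shutdown.sh@76 ran md5sum on testfile):
    md5_written=$(md5sum testfile | awk '{print $1}')
    # After the restart traced above, read the same 262144 blocks back out of the
    # recovered ftl0 bdev and compare; --ib is spdk_dd's input-bdev counterpart
    # to the --ob used when writing:
    spdk_dd --ib=ftl0 --of=testfile.readback --count=262144 --json=ftl.json
    md5_read=$(md5sum testfile.readback | awk '{print $1}')
    [ "$md5_written" = "$md5_read" ]   # equal digests: the data survived the SIGKILL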
00:27:34.252 [2024-11-20 18:02:01.396338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.252 [2024-11-20 18:02:01.396422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:34.252 [2024-11-20 18:02:01.396442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 96.979 ms 00:27:34.252 [2024-11-20 18:02:01.396455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.252 [2024-11-20 18:02:01.408245] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:34.252 [2024-11-20 18:02:01.413160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.252 [2024-11-20 18:02:01.413313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:34.252 [2024-11-20 18:02:01.413341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.663 ms 00:27:34.252 [2024-11-20 18:02:01.413352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.252 [2024-11-20 18:02:01.413502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.252 [2024-11-20 18:02:01.413517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:34.252 [2024-11-20 18:02:01.413530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:34.252 [2024-11-20 18:02:01.413542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.252 [2024-11-20 18:02:01.413638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.252 [2024-11-20 18:02:01.413652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:34.252 [2024-11-20 18:02:01.413664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:27:34.252 [2024-11-20 18:02:01.413675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.252 [2024-11-20 18:02:01.413704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.252 [2024-11-20 18:02:01.413721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:34.252 [2024-11-20 18:02:01.413732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:34.252 [2024-11-20 18:02:01.413744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.252 [2024-11-20 18:02:01.413802] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:34.252 [2024-11-20 18:02:01.413817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.252 [2024-11-20 18:02:01.413829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:34.252 [2024-11-20 18:02:01.413841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:27:34.252 [2024-11-20 18:02:01.413852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.510 [2024-11-20 18:02:01.451835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.510 [2024-11-20 18:02:01.452007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:34.510 [2024-11-20 18:02:01.452034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.012 ms 00:27:34.510 [2024-11-20 18:02:01.452045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.510 [2024-11-20 18:02:01.452179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.510 [2024-11-20 
18:02:01.452195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:27:34.510 [2024-11-20 18:02:01.452208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms
00:27:34.510 [2024-11-20 18:02:01.452220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:27:34.510 [2024-11-20 18:02:01.453699] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 421.398 ms, result 0
00:27:35.445  [2024-11-20T18:02:03.558Z] Copying: 22/1024 [MB] (22 MBps)
[2024-11-20T18:02:46.348Z] Copying: 1024/1024 [MB] (average 22 MBps)
[2024-11-20 18:02:46.324853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:19.172 [2024-11-20 18:02:46.324943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:19.172 [2024-11-20 18:02:46.324962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:28:19.172 [2024-11-20 18:02:46.324974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.172 [2024-11-20 18:02:46.326350] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:19.172 [2024-11-20 18:02:46.332420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.172 [2024-11-20 18:02:46.332582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:19.172 [2024-11-20 18:02:46.332603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.040 ms 00:28:19.172 [2024-11-20 18:02:46.332615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.172 [2024-11-20 18:02:46.344565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.172 [2024-11-20 18:02:46.344705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:19.172 [2024-11-20 18:02:46.344804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.616 ms 00:28:19.172 [2024-11-20 18:02:46.344843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.430 [2024-11-20 18:02:46.367368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.430 [2024-11-20 18:02:46.367514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:19.430 [2024-11-20 18:02:46.367598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.515 ms 00:28:19.430 [2024-11-20 18:02:46.367635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.430 [2024-11-20 18:02:46.372535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.430 [2024-11-20 18:02:46.372664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:19.430 [2024-11-20 18:02:46.372831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.852 ms 00:28:19.430 [2024-11-20 18:02:46.372886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.430 [2024-11-20 18:02:46.409070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.430 [2024-11-20 18:02:46.409205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:19.430 [2024-11-20 18:02:46.409344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.138 ms 00:28:19.430 [2024-11-20 18:02:46.409381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.430 [2024-11-20 18:02:46.430386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.430 [2024-11-20 18:02:46.430529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:19.430 [2024-11-20 18:02:46.430642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.957 ms 00:28:19.430 [2024-11-20 18:02:46.430679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.430 [2024-11-20 18:02:46.542616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.430 [2024-11-20 18:02:46.542779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:19.430 [2024-11-20 18:02:46.542863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 112.057 ms 00:28:19.430 [2024-11-20 18:02:46.542898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.430 [2024-11-20 18:02:46.578379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.430 [2024-11-20 
18:02:46.578513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:28:19.430 [2024-11-20 18:02:46.578585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.495 ms
00:28:19.430 [2024-11-20 18:02:46.578619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:19.690 [2024-11-20 18:02:46.613697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:19.690 [2024-11-20 18:02:46.613837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:28:19.690 [2024-11-20 18:02:46.613909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.053 ms
00:28:19.690 [2024-11-20 18:02:46.613943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:19.690 [2024-11-20 18:02:46.648949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:19.690 [2024-11-20 18:02:46.649074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:28:19.690 [2024-11-20 18:02:46.649158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.008 ms
00:28:19.691 [2024-11-20 18:02:46.649193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:19.691 [2024-11-20 18:02:46.682929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:19.691 [2024-11-20 18:02:46.683053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:28:19.691 [2024-11-20 18:02:46.683137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.695 ms
00:28:19.691 [2024-11-20 18:02:46.683172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:19.691 [2024-11-20 18:02:46.683226] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:19.691 [2024-11-20 18:02:46.683267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 91904 / 261120 wr_cnt: 1 state: open
00:28:19.691 [2024-11-20 18:02:46.683317 - 18:02:46.685484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2 .. Band 100: 0 / 261120 wr_cnt: 0 state: free
00:28:19.692 [2024-11-20 18:02:46.685507] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:19.692 [2024-11-20 18:02:46.685518] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a338d17b-0145-4b0b-9a6a-09252efa416c
00:28:19.692 [2024-11-20 18:02:46.685530] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 91904
00:28:19.692 [2024-11-20 18:02:46.685546] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 92864
00:28:19.692 [2024-11-20 18:02:46.685567] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 91904
00:28:19.692 [2024-11-20 18:02:46.685579] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104
00:28:19.692 [2024-11-20 18:02:46.685589] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:28:19.692 [2024-11-20 18:02:46.685599] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:28:19.692 [2024-11-20 18:02:46.685612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:28:19.692 [2024-11-20 18:02:46.685622] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:28:19.692 [2024-11-20 18:02:46.685631] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:28:19.692 [2024-11-20 18:02:46.685642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:19.692 [2024-11-20 18:02:46.685654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:28:19.692 [2024-11-20 18:02:46.685665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.421 ms
00:28:19.692 [2024-11-20 18:02:46.685674]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.692 [2024-11-20 18:02:46.705874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.692 [2024-11-20 18:02:46.706030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:19.692 [2024-11-20 18:02:46.706051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.181 ms 00:28:19.692 [2024-11-20 18:02:46.706062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.692 [2024-11-20 18:02:46.706673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:19.692 [2024-11-20 18:02:46.706688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:19.692 [2024-11-20 18:02:46.706700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.590 ms 00:28:19.692 [2024-11-20 18:02:46.706717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.692 [2024-11-20 18:02:46.761236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.692 [2024-11-20 18:02:46.761273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:19.692 [2024-11-20 18:02:46.761287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.692 [2024-11-20 18:02:46.761299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.692 [2024-11-20 18:02:46.761356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.692 [2024-11-20 18:02:46.761367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:19.692 [2024-11-20 18:02:46.761378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.692 [2024-11-20 18:02:46.761393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.692 [2024-11-20 18:02:46.761487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.692 [2024-11-20 18:02:46.761501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:19.692 [2024-11-20 18:02:46.761513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.692 [2024-11-20 18:02:46.761523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.692 [2024-11-20 18:02:46.761542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.692 [2024-11-20 18:02:46.761554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:19.692 [2024-11-20 18:02:46.761565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.692 [2024-11-20 18:02:46.761575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.951 [2024-11-20 18:02:46.890447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.951 [2024-11-20 18:02:46.890512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:19.951 [2024-11-20 18:02:46.890546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.951 [2024-11-20 18:02:46.890558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.951 [2024-11-20 18:02:46.992970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.951 [2024-11-20 18:02:46.993194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:19.951 [2024-11-20 18:02:46.993236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:28:19.951 [2024-11-20 18:02:46.993249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.951 [2024-11-20 18:02:46.993413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.951 [2024-11-20 18:02:46.993429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:19.951 [2024-11-20 18:02:46.993442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.951 [2024-11-20 18:02:46.993453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.951 [2024-11-20 18:02:46.993504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.951 [2024-11-20 18:02:46.993517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:19.951 [2024-11-20 18:02:46.993528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.951 [2024-11-20 18:02:46.993539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.951 [2024-11-20 18:02:46.993676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.951 [2024-11-20 18:02:46.993690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:19.951 [2024-11-20 18:02:46.993702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.951 [2024-11-20 18:02:46.993713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.951 [2024-11-20 18:02:46.993752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.951 [2024-11-20 18:02:46.993766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:19.951 [2024-11-20 18:02:46.993800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.951 [2024-11-20 18:02:46.993811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.951 [2024-11-20 18:02:46.993860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.951 [2024-11-20 18:02:46.993877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:19.951 [2024-11-20 18:02:46.993889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.951 [2024-11-20 18:02:46.993899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.951 [2024-11-20 18:02:46.993948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:19.951 [2024-11-20 18:02:46.993960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:19.951 [2024-11-20 18:02:46.993972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:19.951 [2024-11-20 18:02:46.993983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:19.951 [2024-11-20 18:02:46.994132] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 672.484 ms, result 0 00:28:21.856 00:28:21.856 00:28:21.856 18:02:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:28:23.757 18:02:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:23.757 [2024-11-20 18:02:50.495525] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 
initialization... 00:28:23.757 [2024-11-20 18:02:50.495658] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82655 ] 00:28:23.757 [2024-11-20 18:02:50.679931] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.757 [2024-11-20 18:02:50.807466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:24.326 [2024-11-20 18:02:51.217604] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:24.326 [2024-11-20 18:02:51.217992] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:24.326 [2024-11-20 18:02:51.383453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.326 [2024-11-20 18:02:51.383504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:24.326 [2024-11-20 18:02:51.383526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:24.326 [2024-11-20 18:02:51.383536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.326 [2024-11-20 18:02:51.383585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.326 [2024-11-20 18:02:51.383598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:24.326 [2024-11-20 18:02:51.383612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:28:24.326 [2024-11-20 18:02:51.383622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.326 [2024-11-20 18:02:51.383645] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:24.326 [2024-11-20 18:02:51.384646] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:24.326 [2024-11-20 18:02:51.384674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.326 [2024-11-20 18:02:51.384686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:24.326 [2024-11-20 18:02:51.384698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.036 ms 00:28:24.326 [2024-11-20 18:02:51.384709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.326 [2024-11-20 18:02:51.387049] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:24.326 [2024-11-20 18:02:51.406813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.326 [2024-11-20 18:02:51.406848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:24.326 [2024-11-20 18:02:51.406864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.797 ms 00:28:24.326 [2024-11-20 18:02:51.406875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.326 [2024-11-20 18:02:51.407026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.326 [2024-11-20 18:02:51.407040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:24.326 [2024-11-20 18:02:51.407051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:28:24.326 [2024-11-20 18:02:51.407062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.326 [2024-11-20 18:02:51.419098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:28:24.326 [2024-11-20 18:02:51.419127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:24.327 [2024-11-20 18:02:51.419145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.978 ms 00:28:24.327 [2024-11-20 18:02:51.419172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.327 [2024-11-20 18:02:51.419261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.327 [2024-11-20 18:02:51.419275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:24.327 [2024-11-20 18:02:51.419287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:28:24.327 [2024-11-20 18:02:51.419298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.327 [2024-11-20 18:02:51.419354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.327 [2024-11-20 18:02:51.419367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:24.327 [2024-11-20 18:02:51.419378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:24.327 [2024-11-20 18:02:51.419389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.327 [2024-11-20 18:02:51.419420] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:24.327 [2024-11-20 18:02:51.425099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.327 [2024-11-20 18:02:51.425255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:24.327 [2024-11-20 18:02:51.425302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.695 ms 00:28:24.327 [2024-11-20 18:02:51.425314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.327 [2024-11-20 18:02:51.425353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.327 [2024-11-20 18:02:51.425366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:24.327 [2024-11-20 18:02:51.425378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:28:24.327 [2024-11-20 18:02:51.425390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.327 [2024-11-20 18:02:51.425441] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:24.327 [2024-11-20 18:02:51.425469] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:24.327 [2024-11-20 18:02:51.425509] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:24.327 [2024-11-20 18:02:51.425540] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:24.327 [2024-11-20 18:02:51.425640] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:24.327 [2024-11-20 18:02:51.425654] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:24.327 [2024-11-20 18:02:51.425670] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:24.327 [2024-11-20 18:02:51.425684] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:24.327 [2024-11-20 18:02:51.425698] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:24.327 [2024-11-20 18:02:51.425711] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:24.327 [2024-11-20 18:02:51.425722] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:24.327 [2024-11-20 18:02:51.425738] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:24.327 [2024-11-20 18:02:51.425750] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:24.327 [2024-11-20 18:02:51.425761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.327 [2024-11-20 18:02:51.425786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:24.327 [2024-11-20 18:02:51.425798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms 00:28:24.327 [2024-11-20 18:02:51.425810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.327 [2024-11-20 18:02:51.425887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.327 [2024-11-20 18:02:51.425899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:24.327 [2024-11-20 18:02:51.425911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:24.327 [2024-11-20 18:02:51.425922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.327 [2024-11-20 18:02:51.426030] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:24.327 [2024-11-20 18:02:51.426046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:24.327 [2024-11-20 18:02:51.426057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:24.327 [2024-11-20 18:02:51.426070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.327 [2024-11-20 18:02:51.426082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:24.327 [2024-11-20 18:02:51.426092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:24.327 [2024-11-20 18:02:51.426103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:24.327 [2024-11-20 18:02:51.426114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:24.327 [2024-11-20 18:02:51.426126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:24.327 [2024-11-20 18:02:51.426136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:24.327 [2024-11-20 18:02:51.426147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:24.327 [2024-11-20 18:02:51.426161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:24.327 [2024-11-20 18:02:51.426171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:24.327 [2024-11-20 18:02:51.426182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:24.327 [2024-11-20 18:02:51.426193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:24.327 [2024-11-20 18:02:51.426214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.327 [2024-11-20 18:02:51.426225] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:24.327 [2024-11-20 18:02:51.426236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:24.327 [2024-11-20 18:02:51.426246] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.327 [2024-11-20 18:02:51.426257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:24.327 [2024-11-20 18:02:51.426267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:24.327 [2024-11-20 18:02:51.426277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:24.327 [2024-11-20 18:02:51.426287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:24.327 [2024-11-20 18:02:51.426298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:24.327 [2024-11-20 18:02:51.426308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:24.327 [2024-11-20 18:02:51.426318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:24.327 [2024-11-20 18:02:51.426330] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:24.327 [2024-11-20 18:02:51.426340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:24.327 [2024-11-20 18:02:51.426350] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:24.327 [2024-11-20 18:02:51.426360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:24.327 [2024-11-20 18:02:51.426370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:24.327 [2024-11-20 18:02:51.426380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:24.327 [2024-11-20 18:02:51.426389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:24.327 [2024-11-20 18:02:51.426399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:24.327 [2024-11-20 18:02:51.426409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:24.327 [2024-11-20 18:02:51.426418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:24.327 [2024-11-20 18:02:51.426428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:24.327 [2024-11-20 18:02:51.426438] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:24.327 [2024-11-20 18:02:51.426448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:24.327 [2024-11-20 18:02:51.426458] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.327 [2024-11-20 18:02:51.426467] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:24.327 [2024-11-20 18:02:51.426477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:24.327 [2024-11-20 18:02:51.426487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.327 [2024-11-20 18:02:51.426498] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:24.327 [2024-11-20 18:02:51.426510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:24.327 [2024-11-20 18:02:51.426521] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:24.327 [2024-11-20 18:02:51.426532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:24.327 [2024-11-20 18:02:51.426543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:24.327 [2024-11-20 18:02:51.426554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:24.327 [2024-11-20 18:02:51.426564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:24.327 
[2024-11-20 18:02:51.426574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:24.327 [2024-11-20 18:02:51.426584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:24.327 [2024-11-20 18:02:51.426593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:24.327 [2024-11-20 18:02:51.426605] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:24.327 [2024-11-20 18:02:51.426618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:24.327 [2024-11-20 18:02:51.426636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:24.327 [2024-11-20 18:02:51.426647] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:24.327 [2024-11-20 18:02:51.426659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:24.327 [2024-11-20 18:02:51.426670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:24.327 [2024-11-20 18:02:51.426682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:24.327 [2024-11-20 18:02:51.426693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:24.328 [2024-11-20 18:02:51.426705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:24.328 [2024-11-20 18:02:51.426717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:24.328 [2024-11-20 18:02:51.426729] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:24.328 [2024-11-20 18:02:51.426740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:24.328 [2024-11-20 18:02:51.426762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:24.328 [2024-11-20 18:02:51.426773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:24.328 [2024-11-20 18:02:51.426805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:24.328 [2024-11-20 18:02:51.426816] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:24.328 [2024-11-20 18:02:51.426827] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:24.328 [2024-11-20 18:02:51.426839] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:24.328 [2024-11-20 18:02:51.426852] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:24.328 [2024-11-20 18:02:51.426863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:24.328 [2024-11-20 18:02:51.426875] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:24.328 [2024-11-20 18:02:51.426886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:24.328 [2024-11-20 18:02:51.426899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.328 [2024-11-20 18:02:51.426910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:24.328 [2024-11-20 18:02:51.426922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.928 ms 00:28:24.328 [2024-11-20 18:02:51.426933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.328 [2024-11-20 18:02:51.474827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.328 [2024-11-20 18:02:51.475009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:24.328 [2024-11-20 18:02:51.475033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.920 ms 00:28:24.328 [2024-11-20 18:02:51.475054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.328 [2024-11-20 18:02:51.475143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.328 [2024-11-20 18:02:51.475156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:24.328 [2024-11-20 18:02:51.475169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:28:24.328 [2024-11-20 18:02:51.475180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:02:51.552838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.588 [2024-11-20 18:02:51.552998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:24.588 [2024-11-20 18:02:51.553022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.719 ms 00:28:24.588 [2024-11-20 18:02:51.553034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:02:51.553110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.588 [2024-11-20 18:02:51.553130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:24.588 [2024-11-20 18:02:51.553141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:24.588 [2024-11-20 18:02:51.553152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:02:51.553960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.588 [2024-11-20 18:02:51.553977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:24.588 [2024-11-20 18:02:51.553989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:28:24.588 [2024-11-20 18:02:51.553999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:02:51.554134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.588 [2024-11-20 18:02:51.554149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:24.588 [2024-11-20 18:02:51.554167] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:28:24.588 [2024-11-20 18:02:51.554179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:02:51.576586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.588 [2024-11-20 18:02:51.576722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:24.588 [2024-11-20 18:02:51.576759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.420 ms 00:28:24.588 [2024-11-20 18:02:51.576771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:02:51.597226] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:28:24.588 [2024-11-20 18:02:51.597390] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:24.588 [2024-11-20 18:02:51.597503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.588 [2024-11-20 18:02:51.597538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:24.588 [2024-11-20 18:02:51.597571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.616 ms 00:28:24.588 [2024-11-20 18:02:51.597646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:02:51.627467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.588 [2024-11-20 18:02:51.627599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:24.588 [2024-11-20 18:02:51.627734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.790 ms 00:28:24.588 [2024-11-20 18:02:51.627782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:02:51.645595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.588 [2024-11-20 18:02:51.645749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:24.588 [2024-11-20 18:02:51.645907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.775 ms 00:28:24.588 [2024-11-20 18:02:51.645946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:02:51.663358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.588 [2024-11-20 18:02:51.663479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:24.588 [2024-11-20 18:02:51.663564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.323 ms 00:28:24.588 [2024-11-20 18:02:51.663598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:02:51.664352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.588 [2024-11-20 18:02:51.664479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:24.588 [2024-11-20 18:02:51.664562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.635 ms 00:28:24.588 [2024-11-20 18:02:51.664597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.588 [2024-11-20 18:02:51.758466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.588 [2024-11-20 18:02:51.758693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:24.588 [2024-11-20 18:02:51.758851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 93.970 ms 00:28:24.588 [2024-11-20 18:02:51.758891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.877 [2024-11-20 18:02:51.769978] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:24.877 [2024-11-20 18:02:51.773366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.877 [2024-11-20 18:02:51.773511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:24.877 [2024-11-20 18:02:51.773587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.430 ms 00:28:24.877 [2024-11-20 18:02:51.773623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.877 [2024-11-20 18:02:51.773739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.877 [2024-11-20 18:02:51.773789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:24.877 [2024-11-20 18:02:51.773829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:28:24.877 [2024-11-20 18:02:51.773859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.877 [2024-11-20 18:02:51.775945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.877 [2024-11-20 18:02:51.776085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:24.877 [2024-11-20 18:02:51.776154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.950 ms 00:28:24.877 [2024-11-20 18:02:51.776190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.877 [2024-11-20 18:02:51.776251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.877 [2024-11-20 18:02:51.776285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:24.877 [2024-11-20 18:02:51.776316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:28:24.877 [2024-11-20 18:02:51.776346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.877 [2024-11-20 18:02:51.776417] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:24.877 [2024-11-20 18:02:51.776547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.877 [2024-11-20 18:02:51.776620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:24.877 [2024-11-20 18:02:51.776651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:28:24.877 [2024-11-20 18:02:51.776684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.877 [2024-11-20 18:02:51.812679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.877 [2024-11-20 18:02:51.812860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:24.877 [2024-11-20 18:02:51.813017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.007 ms 00:28:24.877 [2024-11-20 18:02:51.813056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:24.877 [2024-11-20 18:02:51.813162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:24.877 [2024-11-20 18:02:51.813296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:24.877 [2024-11-20 18:02:51.813367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:28:24.877 [2024-11-20 18:02:51.813399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:28:24.877 [2024-11-20 18:02:51.814990] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 431.624 ms, result 0
00:28:25.886  [2024-11-20T18:02:54.440Z] Copying: 1416/1048576 [kB] (1416 kBps) [...] [2024-11-20T18:03:26.604Z] Copying: 1024/1024 [MB] (average 31 MBps)
[2024-11-20 18:03:26.048928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:59.428 [2024-11-20 18:03:26.049060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:28:59.428 [2024-11-20 18:03:26.049111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:28:59.428 [2024-11-20 18:03:26.049148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:59.428 [2024-11-20 18:03:26.049372] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:28:59.428 [2024-11-20 18:03:26.061334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:59.428 [2024-11-20 18:03:26.061453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:28:59.428 [2024-11-20 18:03:26.061487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.889 ms
00:28:59.428 [2024-11-20 18:03:26.061510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:59.428 [2024-11-20 18:03:26.062002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:59.428 [2024-11-20 18:03:26.062042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:28:59.429 [2024-11-20 18:03:26.062065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 0.434 ms 00:28:59.429 [2024-11-20 18:03:26.062088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.429 [2024-11-20 18:03:26.079469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.429 [2024-11-20 18:03:26.079781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:59.429 [2024-11-20 18:03:26.079809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.371 ms 00:28:59.429 [2024-11-20 18:03:26.079822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.429 [2024-11-20 18:03:26.084876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.429 [2024-11-20 18:03:26.084915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:59.429 [2024-11-20 18:03:26.084941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.014 ms 00:28:59.429 [2024-11-20 18:03:26.084952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.429 [2024-11-20 18:03:26.120727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.429 [2024-11-20 18:03:26.120782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:59.429 [2024-11-20 18:03:26.120799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.761 ms 00:28:59.429 [2024-11-20 18:03:26.120826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.429 [2024-11-20 18:03:26.141916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.429 [2024-11-20 18:03:26.142122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:59.429 [2024-11-20 18:03:26.142146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.082 ms 00:28:59.429 [2024-11-20 18:03:26.142159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.429 [2024-11-20 18:03:26.144473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.429 [2024-11-20 18:03:26.144512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:59.429 [2024-11-20 18:03:26.144526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.275 ms 00:28:59.429 [2024-11-20 18:03:26.144544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.429 [2024-11-20 18:03:26.180957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.429 [2024-11-20 18:03:26.180993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:59.429 [2024-11-20 18:03:26.181007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.453 ms 00:28:59.429 [2024-11-20 18:03:26.181017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.429 [2024-11-20 18:03:26.215644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.429 [2024-11-20 18:03:26.215681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:59.429 [2024-11-20 18:03:26.215706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.646 ms 00:28:59.429 [2024-11-20 18:03:26.215716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.429 [2024-11-20 18:03:26.250945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:59.429 [2024-11-20 18:03:26.250981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:59.429 [2024-11-20 
18:03:26.250994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.250 ms
00:28:59.429 [2024-11-20 18:03:26.251005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:59.429 [2024-11-20 18:03:26.286145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:59.429 [2024-11-20 18:03:26.286182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:28:59.429 [2024-11-20 18:03:26.286196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.110 ms
00:28:59.429 [2024-11-20 18:03:26.286206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:59.429 [2024-11-20 18:03:26.286244] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:28:59.429 [2024-11-20 18:03:26.286261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed
00:28:59.429 [2024-11-20 18:03:26.286275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open
00:28:59.429 [2024-11-20 18:03:26.286287 .. 18:03:26.287416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3 .. Band 100: 0 / 261120 wr_cnt: 0 state: free
00:28:59.430 [2024-11-20 18:03:26.287434] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:28:59.430 [2024-11-20 18:03:26.287445] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a338d17b-0145-4b0b-9a6a-09252efa416c
00:28:59.430 [2024-11-20 18:03:26.287457] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656
00:28:59.430 [2024-11-20 18:03:26.287468] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 172736
00:28:59.430 [2024-11-20 18:03:26.287482] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 170752
00:28:59.430 [2024-11-20 18:03:26.287493] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0116
00:28:59.430 [2024-11-20 18:03:26.287504] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:28:59.430 [2024-11-20 18:03:26.287514] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:28:59.430 [2024-11-20 18:03:26.287525] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:28:59.430 [2024-11-20 18:03:26.287545] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:28:59.430 [2024-11-20 18:03:26.287554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:28:59.430 [2024-11-20 18:03:26.287564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:59.430 [2024-11-20 18:03:26.287575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:28:59.430 [2024-11-20 18:03:26.287585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.324 ms
00:28:59.430 [2024-11-20 18:03:26.287596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:59.430 [2024-11-20 18:03:26.306633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:59.430 [2024-11-20 18:03:26.306668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:28:59.430 [2024-11-20 18:03:26.306680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.033 ms
00:28:59.430 [2024-11-20 18:03:26.306691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:59.430 [2024-11-20 18:03:26.307208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:28:59.430 [2024-11-20 18:03:26.307223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:28:59.430 [2024-11-20 18:03:26.307234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.496 ms
00:28:59.430 [2024-11-20 18:03:26.307244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:28:59.430 [2024-11-20
18:03:26.358072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.430 [2024-11-20 18:03:26.358109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:59.430 [2024-11-20 18:03:26.358122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.430 [2024-11-20 18:03:26.358133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.430 [2024-11-20 18:03:26.358183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.430 [2024-11-20 18:03:26.358195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:59.430 [2024-11-20 18:03:26.358206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.430 [2024-11-20 18:03:26.358216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.430 [2024-11-20 18:03:26.358305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.430 [2024-11-20 18:03:26.358320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:59.430 [2024-11-20 18:03:26.358332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.430 [2024-11-20 18:03:26.358342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.430 [2024-11-20 18:03:26.358359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.430 [2024-11-20 18:03:26.358370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:59.430 [2024-11-20 18:03:26.358380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.430 [2024-11-20 18:03:26.358391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.430 [2024-11-20 18:03:26.478170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.430 [2024-11-20 18:03:26.478232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:59.430 [2024-11-20 18:03:26.478248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.430 [2024-11-20 18:03:26.478260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.430 [2024-11-20 18:03:26.578162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.430 [2024-11-20 18:03:26.578224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:59.430 [2024-11-20 18:03:26.578239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.430 [2024-11-20 18:03:26.578250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.430 [2024-11-20 18:03:26.578348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.430 [2024-11-20 18:03:26.578366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:59.430 [2024-11-20 18:03:26.578378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.430 [2024-11-20 18:03:26.578389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.430 [2024-11-20 18:03:26.578433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.430 [2024-11-20 18:03:26.578446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:59.430 [2024-11-20 18:03:26.578456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.430 [2024-11-20 18:03:26.578468] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.430 [2024-11-20 18:03:26.578565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.430 [2024-11-20 18:03:26.578585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:59.430 [2024-11-20 18:03:26.578595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.431 [2024-11-20 18:03:26.578606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.431 [2024-11-20 18:03:26.578646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.431 [2024-11-20 18:03:26.578659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:59.431 [2024-11-20 18:03:26.578671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.431 [2024-11-20 18:03:26.578682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.431 [2024-11-20 18:03:26.578721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.431 [2024-11-20 18:03:26.578732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:59.431 [2024-11-20 18:03:26.578747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.431 [2024-11-20 18:03:26.578757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.431 [2024-11-20 18:03:26.578830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:59.431 [2024-11-20 18:03:26.578860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:59.431 [2024-11-20 18:03:26.578871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:59.431 [2024-11-20 18:03:26.578883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:59.431 [2024-11-20 18:03:26.579009] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 530.959 ms, result 0 00:29:00.810 00:29:00.810 00:29:00.810 18:03:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:02.714 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:29:02.714 18:03:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:02.714 [2024-11-20 18:03:29.476173] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
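The two bash-traced commands above are the heart of the dirty-shutdown check: dirty_shutdown.sh@94 re-verifies the data written before the device was shut down uncleanly, and @95 starts a fresh spdk_dd process that reads the second half of the test region back out of ftl0. The shape of that round-trip, with paths and flags copied from the invocation logged above (a sketch of this step, not the test script itself):

    # Confirm the pre-shutdown data still matches its recorded checksum...
    md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
    # ...then read the second 262144-block half of the region back out of
    # ftl0, recreating the bdev stack from the saved JSON config.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
        --count=262144 --skip=262144 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json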
00:29:02.714 [2024-11-20 18:03:29.476431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83046 ] 00:29:02.714 [2024-11-20 18:03:29.656855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.714 [2024-11-20 18:03:29.769372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.974 [2024-11-20 18:03:30.129118] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:02.974 [2024-11-20 18:03:30.129179] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:29:03.234 [2024-11-20 18:03:30.290410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.234 [2024-11-20 18:03:30.290637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:29:03.234 [2024-11-20 18:03:30.290684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:03.234 [2024-11-20 18:03:30.290695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.234 [2024-11-20 18:03:30.290752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.234 [2024-11-20 18:03:30.290764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:03.234 [2024-11-20 18:03:30.290779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:29:03.234 [2024-11-20 18:03:30.290809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.235 [2024-11-20 18:03:30.290833] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:29:03.235 [2024-11-20 18:03:30.291847] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:29:03.235 [2024-11-20 18:03:30.291872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.235 [2024-11-20 18:03:30.291884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:03.235 [2024-11-20 18:03:30.291895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.046 ms 00:29:03.235 [2024-11-20 18:03:30.291906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.235 [2024-11-20 18:03:30.293365] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:29:03.235 [2024-11-20 18:03:30.312329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.235 [2024-11-20 18:03:30.312493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:29:03.235 [2024-11-20 18:03:30.312516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.995 ms 00:29:03.235 [2024-11-20 18:03:30.312527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.235 [2024-11-20 18:03:30.312592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.235 [2024-11-20 18:03:30.312605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:29:03.235 [2024-11-20 18:03:30.312616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:29:03.235 [2024-11-20 18:03:30.312627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.235 [2024-11-20 18:03:30.319542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:29:03.235 [2024-11-20 18:03:30.319721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:03.235 [2024-11-20 18:03:30.319742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.852 ms 00:29:03.235 [2024-11-20 18:03:30.319760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.235 [2024-11-20 18:03:30.319862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.235 [2024-11-20 18:03:30.319877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:03.235 [2024-11-20 18:03:30.319888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:29:03.235 [2024-11-20 18:03:30.319899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.235 [2024-11-20 18:03:30.319944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.235 [2024-11-20 18:03:30.319956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:29:03.235 [2024-11-20 18:03:30.319968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:29:03.235 [2024-11-20 18:03:30.319978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.235 [2024-11-20 18:03:30.320008] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:29:03.235 [2024-11-20 18:03:30.324583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.235 [2024-11-20 18:03:30.324612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:03.235 [2024-11-20 18:03:30.324625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.593 ms 00:29:03.235 [2024-11-20 18:03:30.324639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.235 [2024-11-20 18:03:30.324669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.235 [2024-11-20 18:03:30.324679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:29:03.235 [2024-11-20 18:03:30.324690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:29:03.235 [2024-11-20 18:03:30.324700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.235 [2024-11-20 18:03:30.324754] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:29:03.235 [2024-11-20 18:03:30.324812] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:29:03.235 [2024-11-20 18:03:30.324849] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:29:03.235 [2024-11-20 18:03:30.324871] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:29:03.235 [2024-11-20 18:03:30.324958] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:29:03.235 [2024-11-20 18:03:30.324973] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:29:03.235 [2024-11-20 18:03:30.324987] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:29:03.235 [2024-11-20 18:03:30.325000] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:29:03.235 [2024-11-20 18:03:30.325012] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:29:03.235 [2024-11-20 18:03:30.325024] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:29:03.235 [2024-11-20 18:03:30.325035] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:29:03.235 [2024-11-20 18:03:30.325046] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:29:03.235 [2024-11-20 18:03:30.325059] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:29:03.235 [2024-11-20 18:03:30.325071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.235 [2024-11-20 18:03:30.325083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:29:03.235 [2024-11-20 18:03:30.325094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.320 ms 00:29:03.235 [2024-11-20 18:03:30.325104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.235 [2024-11-20 18:03:30.325175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.235 [2024-11-20 18:03:30.325187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:29:03.235 [2024-11-20 18:03:30.325199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:29:03.235 [2024-11-20 18:03:30.325209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.235 [2024-11-20 18:03:30.325306] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:29:03.235 [2024-11-20 18:03:30.325321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:29:03.235 [2024-11-20 18:03:30.325332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:03.235 [2024-11-20 18:03:30.325344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.235 [2024-11-20 18:03:30.325354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:29:03.235 [2024-11-20 18:03:30.325364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:29:03.235 [2024-11-20 18:03:30.325375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:29:03.235 [2024-11-20 18:03:30.325385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:29:03.235 [2024-11-20 18:03:30.325404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:29:03.235 [2024-11-20 18:03:30.325414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:03.235 [2024-11-20 18:03:30.325425] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:29:03.235 [2024-11-20 18:03:30.325435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:29:03.235 [2024-11-20 18:03:30.325444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:29:03.235 [2024-11-20 18:03:30.325454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:29:03.235 [2024-11-20 18:03:30.325464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:29:03.235 [2024-11-20 18:03:30.325483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.235 [2024-11-20 18:03:30.325493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:29:03.235 [2024-11-20 18:03:30.325502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:29:03.235 [2024-11-20 18:03:30.325512] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.235 [2024-11-20 18:03:30.325522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:29:03.235 [2024-11-20 18:03:30.325532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:29:03.235 [2024-11-20 18:03:30.325541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.235 [2024-11-20 18:03:30.325550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:29:03.235 [2024-11-20 18:03:30.325559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:29:03.235 [2024-11-20 18:03:30.325569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.235 [2024-11-20 18:03:30.325578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:29:03.235 [2024-11-20 18:03:30.325588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:29:03.235 [2024-11-20 18:03:30.325597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.235 [2024-11-20 18:03:30.325605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:29:03.235 [2024-11-20 18:03:30.325614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:29:03.235 [2024-11-20 18:03:30.325623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:29:03.235 [2024-11-20 18:03:30.325632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:29:03.235 [2024-11-20 18:03:30.325641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:29:03.235 [2024-11-20 18:03:30.325651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:03.235 [2024-11-20 18:03:30.325660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:29:03.235 [2024-11-20 18:03:30.325669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:29:03.235 [2024-11-20 18:03:30.325677] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:29:03.235 [2024-11-20 18:03:30.325686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:29:03.235 [2024-11-20 18:03:30.325695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:29:03.235 [2024-11-20 18:03:30.325704] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.235 [2024-11-20 18:03:30.325713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:29:03.235 [2024-11-20 18:03:30.325721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:29:03.235 [2024-11-20 18:03:30.325731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.235 [2024-11-20 18:03:30.325740] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:29:03.235 [2024-11-20 18:03:30.325750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:29:03.235 [2024-11-20 18:03:30.325760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:29:03.235 [2024-11-20 18:03:30.325783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:29:03.236 [2024-11-20 18:03:30.325793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:29:03.236 [2024-11-20 18:03:30.325803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:29:03.236 [2024-11-20 18:03:30.325813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:29:03.236 
[2024-11-20 18:03:30.325823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:29:03.236 [2024-11-20 18:03:30.325832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:29:03.236 [2024-11-20 18:03:30.325842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:29:03.236 [2024-11-20 18:03:30.325853] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:29:03.236 [2024-11-20 18:03:30.325865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:03.236 [2024-11-20 18:03:30.325876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:29:03.236 [2024-11-20 18:03:30.325887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:29:03.236 [2024-11-20 18:03:30.325897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:29:03.236 [2024-11-20 18:03:30.325907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:29:03.236 [2024-11-20 18:03:30.325917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:29:03.236 [2024-11-20 18:03:30.325927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:29:03.236 [2024-11-20 18:03:30.325937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:29:03.236 [2024-11-20 18:03:30.325948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:29:03.236 [2024-11-20 18:03:30.325957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:29:03.236 [2024-11-20 18:03:30.325968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:29:03.236 [2024-11-20 18:03:30.325978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:29:03.236 [2024-11-20 18:03:30.325989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:29:03.236 [2024-11-20 18:03:30.325999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:29:03.236 [2024-11-20 18:03:30.326009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:29:03.236 [2024-11-20 18:03:30.326019] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:29:03.236 [2024-11-20 18:03:30.326035] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:03.236 [2024-11-20 18:03:30.326045] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:29:03.236 [2024-11-20 18:03:30.326055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:29:03.236 [2024-11-20 18:03:30.326066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:29:03.236 [2024-11-20 18:03:30.326079] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:29:03.236 [2024-11-20 18:03:30.326092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.236 [2024-11-20 18:03:30.326102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:29:03.236 [2024-11-20 18:03:30.326113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.842 ms 00:29:03.236 [2024-11-20 18:03:30.326124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.236 [2024-11-20 18:03:30.367569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.236 [2024-11-20 18:03:30.367627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:03.236 [2024-11-20 18:03:30.367644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.466 ms 00:29:03.236 [2024-11-20 18:03:30.367655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.236 [2024-11-20 18:03:30.367752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.236 [2024-11-20 18:03:30.367765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:29:03.236 [2024-11-20 18:03:30.367790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:29:03.236 [2024-11-20 18:03:30.367800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.496 [2024-11-20 18:03:30.427007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.496 [2024-11-20 18:03:30.427050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:03.496 [2024-11-20 18:03:30.427064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.216 ms 00:29:03.496 [2024-11-20 18:03:30.427075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.496 [2024-11-20 18:03:30.427125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.496 [2024-11-20 18:03:30.427136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:03.496 [2024-11-20 18:03:30.427152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:29:03.496 [2024-11-20 18:03:30.427162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.496 [2024-11-20 18:03:30.427670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.496 [2024-11-20 18:03:30.427685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:03.496 [2024-11-20 18:03:30.427697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.432 ms 00:29:03.496 [2024-11-20 18:03:30.427707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.496 [2024-11-20 18:03:30.427841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.496 [2024-11-20 18:03:30.427856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:03.496 [2024-11-20 18:03:30.427867] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:29:03.496 [2024-11-20 18:03:30.427884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.496 [2024-11-20 18:03:30.446940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.496 [2024-11-20 18:03:30.447123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:03.496 [2024-11-20 18:03:30.447152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.066 ms 00:29:03.496 [2024-11-20 18:03:30.447163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.496 [2024-11-20 18:03:30.466108] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:29:03.496 [2024-11-20 18:03:30.466251] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:29:03.496 [2024-11-20 18:03:30.466271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.496 [2024-11-20 18:03:30.466283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:29:03.496 [2024-11-20 18:03:30.466294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.016 ms 00:29:03.496 [2024-11-20 18:03:30.466305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.496 [2024-11-20 18:03:30.496423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.496 [2024-11-20 18:03:30.496462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:29:03.496 [2024-11-20 18:03:30.496476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.062 ms 00:29:03.496 [2024-11-20 18:03:30.496488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.496 [2024-11-20 18:03:30.515073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.496 [2024-11-20 18:03:30.515109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:29:03.496 [2024-11-20 18:03:30.515122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.567 ms 00:29:03.496 [2024-11-20 18:03:30.515132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.496 [2024-11-20 18:03:30.533585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.496 [2024-11-20 18:03:30.533622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:29:03.496 [2024-11-20 18:03:30.533637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.445 ms 00:29:03.496 [2024-11-20 18:03:30.533647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.496 [2024-11-20 18:03:30.534462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.496 [2024-11-20 18:03:30.534497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:29:03.496 [2024-11-20 18:03:30.534509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:29:03.496 [2024-11-20 18:03:30.534524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.496 [2024-11-20 18:03:30.620867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.496 [2024-11-20 18:03:30.620921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:29:03.496 [2024-11-20 18:03:30.620944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.461 ms 00:29:03.496 [2024-11-20 18:03:30.620956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.496 [2024-11-20 18:03:30.631841] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:29:03.496 [2024-11-20 18:03:30.634623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.496 [2024-11-20 18:03:30.634656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:29:03.496 [2024-11-20 18:03:30.634672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.639 ms 00:29:03.496 [2024-11-20 18:03:30.634684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.496 [2024-11-20 18:03:30.634790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.496 [2024-11-20 18:03:30.634805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:29:03.496 [2024-11-20 18:03:30.634817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:29:03.496 [2024-11-20 18:03:30.634832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.496 [2024-11-20 18:03:30.635718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.496 [2024-11-20 18:03:30.635752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:29:03.496 [2024-11-20 18:03:30.635764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 00:29:03.496 [2024-11-20 18:03:30.635784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.496 [2024-11-20 18:03:30.635810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.496 [2024-11-20 18:03:30.635823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:29:03.496 [2024-11-20 18:03:30.635833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:29:03.496 [2024-11-20 18:03:30.635843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.496 [2024-11-20 18:03:30.635882] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:29:03.496 [2024-11-20 18:03:30.635894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.496 [2024-11-20 18:03:30.635905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:29:03.496 [2024-11-20 18:03:30.635915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:29:03.496 [2024-11-20 18:03:30.635925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.756 [2024-11-20 18:03:30.672829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.756 [2024-11-20 18:03:30.672870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:29:03.756 [2024-11-20 18:03:30.672886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.944 ms 00:29:03.756 [2024-11-20 18:03:30.672902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:03.756 [2024-11-20 18:03:30.672976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:03.756 [2024-11-20 18:03:30.672989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:29:03.756 [2024-11-20 18:03:30.673000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:29:03.756 [2024-11-20 18:03:30.673011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
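For reference, the statistics dumped after the first run's shutdown (above) let the logged WAF be checked by hand: write amplification is total media writes divided by user writes, 172736 / 170752 ≈ 1.0116, matching the ftl_debug.c report. A quick check, using bc purely for illustration:

    # WAF = total writes / user writes, numbers from the stats dump above
    echo 'scale=4; 172736 / 170752' | bc    # prints 1.0116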
00:29:03.756 [2024-11-20 18:03:30.674151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.869 ms, result 0 00:29:05.132  [2024-11-20T18:03:33.246Z] Copying: 27/1024 [MB] (27 MBps) [2024-11-20T18:03:34.184Z] Copying: 54/1024 [MB] (27 MBps) [2024-11-20T18:03:35.121Z] Copying: 81/1024 [MB] (27 MBps) [2024-11-20T18:03:36.056Z] Copying: 108/1024 [MB] (27 MBps) [2024-11-20T18:03:36.991Z] Copying: 135/1024 [MB] (26 MBps) [2024-11-20T18:03:37.925Z] Copying: 163/1024 [MB] (27 MBps) [2024-11-20T18:03:39.302Z] Copying: 189/1024 [MB] (26 MBps) [2024-11-20T18:03:40.239Z] Copying: 217/1024 [MB] (27 MBps) [2024-11-20T18:03:41.177Z] Copying: 244/1024 [MB] (27 MBps) [2024-11-20T18:03:42.115Z] Copying: 271/1024 [MB] (27 MBps) [2024-11-20T18:03:43.052Z] Copying: 297/1024 [MB] (25 MBps) [2024-11-20T18:03:44.017Z] Copying: 320/1024 [MB] (23 MBps) [2024-11-20T18:03:45.001Z] Copying: 345/1024 [MB] (24 MBps) [2024-11-20T18:03:45.938Z] Copying: 370/1024 [MB] (25 MBps) [2024-11-20T18:03:46.876Z] Copying: 394/1024 [MB] (24 MBps) [2024-11-20T18:03:48.254Z] Copying: 419/1024 [MB] (24 MBps) [2024-11-20T18:03:49.191Z] Copying: 445/1024 [MB] (25 MBps) [2024-11-20T18:03:50.135Z] Copying: 470/1024 [MB] (25 MBps) [2024-11-20T18:03:51.072Z] Copying: 495/1024 [MB] (25 MBps) [2024-11-20T18:03:52.009Z] Copying: 521/1024 [MB] (25 MBps) [2024-11-20T18:03:52.946Z] Copying: 546/1024 [MB] (24 MBps) [2024-11-20T18:03:53.883Z] Copying: 571/1024 [MB] (25 MBps) [2024-11-20T18:03:55.262Z] Copying: 596/1024 [MB] (25 MBps) [2024-11-20T18:03:56.198Z] Copying: 622/1024 [MB] (25 MBps) [2024-11-20T18:03:57.136Z] Copying: 648/1024 [MB] (25 MBps) [2024-11-20T18:03:58.073Z] Copying: 673/1024 [MB] (25 MBps) [2024-11-20T18:03:59.010Z] Copying: 699/1024 [MB] (25 MBps) [2024-11-20T18:03:59.946Z] Copying: 724/1024 [MB] (25 MBps) [2024-11-20T18:04:00.883Z] Copying: 749/1024 [MB] (25 MBps) [2024-11-20T18:04:01.862Z] Copying: 776/1024 [MB] (26 MBps) [2024-11-20T18:04:03.242Z] Copying: 802/1024 [MB] (25 MBps) [2024-11-20T18:04:04.181Z] Copying: 826/1024 [MB] (24 MBps) [2024-11-20T18:04:05.119Z] Copying: 851/1024 [MB] (25 MBps) [2024-11-20T18:04:06.058Z] Copying: 877/1024 [MB] (26 MBps) [2024-11-20T18:04:06.994Z] Copying: 902/1024 [MB] (25 MBps) [2024-11-20T18:04:07.931Z] Copying: 927/1024 [MB] (24 MBps) [2024-11-20T18:04:08.869Z] Copying: 950/1024 [MB] (23 MBps) [2024-11-20T18:04:10.246Z] Copying: 974/1024 [MB] (23 MBps) [2024-11-20T18:04:11.186Z] Copying: 998/1024 [MB] (23 MBps) [2024-11-20T18:04:11.186Z] Copying: 1022/1024 [MB] (24 MBps) [2024-11-20T18:04:11.186Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-11-20 18:04:10.917588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.010 [2024-11-20 18:04:10.917673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:44.010 [2024-11-20 18:04:10.917698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:44.010 [2024-11-20 18:04:10.917714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.010 [2024-11-20 18:04:10.917745] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:44.010 [2024-11-20 18:04:10.924655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.010 [2024-11-20 18:04:10.924703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:44.010 [2024-11-20 18:04:10.924731] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 6.877 ms 00:29:44.010 [2024-11-20 18:04:10.924746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.010 [2024-11-20 18:04:10.925035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.011 [2024-11-20 18:04:10.925055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:44.011 [2024-11-20 18:04:10.925071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:29:44.011 [2024-11-20 18:04:10.925086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.011 [2024-11-20 18:04:10.929015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.011 [2024-11-20 18:04:10.929059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:44.011 [2024-11-20 18:04:10.929075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.915 ms 00:29:44.011 [2024-11-20 18:04:10.929089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.011 [2024-11-20 18:04:10.934641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.011 [2024-11-20 18:04:10.934673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:44.011 [2024-11-20 18:04:10.934686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.529 ms 00:29:44.011 [2024-11-20 18:04:10.934697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.011 [2024-11-20 18:04:10.971508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.011 [2024-11-20 18:04:10.971678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:44.011 [2024-11-20 18:04:10.971699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.775 ms 00:29:44.011 [2024-11-20 18:04:10.971709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.011 [2024-11-20 18:04:10.993097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.011 [2024-11-20 18:04:10.993246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:44.011 [2024-11-20 18:04:10.993267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.329 ms 00:29:44.011 [2024-11-20 18:04:10.993277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.011 [2024-11-20 18:04:10.995932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.011 [2024-11-20 18:04:10.995975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:44.011 [2024-11-20 18:04:10.995988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.574 ms 00:29:44.011 [2024-11-20 18:04:10.995998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.011 [2024-11-20 18:04:11.030226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.011 [2024-11-20 18:04:11.030369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:44.011 [2024-11-20 18:04:11.030390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.266 ms 00:29:44.011 [2024-11-20 18:04:11.030399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.011 [2024-11-20 18:04:11.064148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.011 [2024-11-20 18:04:11.064195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:44.011 
[2024-11-20 18:04:11.064207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.712 ms 00:29:44.011 [2024-11-20 18:04:11.064216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.011 [2024-11-20 18:04:11.098287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.011 [2024-11-20 18:04:11.098321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:44.011 [2024-11-20 18:04:11.098334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.087 ms 00:29:44.011 [2024-11-20 18:04:11.098344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.011 [2024-11-20 18:04:11.131519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.011 [2024-11-20 18:04:11.131552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:44.011 [2024-11-20 18:04:11.131565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.152 ms 00:29:44.011 [2024-11-20 18:04:11.131574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.011 [2024-11-20 18:04:11.131609] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:44.011 [2024-11-20 18:04:11.131625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:44.011 [2024-11-20 18:04:11.131645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:44.011 [2024-11-20 18:04:11.131656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 
18:04:11.131805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:44.011 [2024-11-20 18:04:11.131998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 
00:29:44.012 [2024-11-20 18:04:11.132057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 
wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:44.012 [2024-11-20 18:04:11.132589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:44.013 [2024-11-20 18:04:11.132599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:44.013 [2024-11-20 18:04:11.132609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:44.013 [2024-11-20 18:04:11.132618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:44.013 [2024-11-20 18:04:11.132628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:44.013 [2024-11-20 18:04:11.132638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:44.013 [2024-11-20 18:04:11.132655] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:44.013 [2024-11-20 18:04:11.132668] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a338d17b-0145-4b0b-9a6a-09252efa416c 00:29:44.013 [2024-11-20 18:04:11.132678] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:44.013 [2024-11-20 18:04:11.132688] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:44.013 [2024-11-20 18:04:11.132698] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:44.013 [2024-11-20 18:04:11.132708] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:44.013 [2024-11-20 18:04:11.132717] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:44.013 [2024-11-20 18:04:11.132727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:44.013 [2024-11-20 18:04:11.132747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:44.013 [2024-11-20 18:04:11.132755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:44.013 [2024-11-20 18:04:11.132763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:44.013 [2024-11-20 18:04:11.133033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.013 [2024-11-20 18:04:11.133066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:44.013 [2024-11-20 18:04:11.133096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.426 ms 00:29:44.013 [2024-11-20 18:04:11.133124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.013 [2024-11-20 18:04:11.152309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.013 [2024-11-20 18:04:11.152437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:44.013 [2024-11-20 18:04:11.152505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.142 ms 00:29:44.013 [2024-11-20 18:04:11.152538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.013 [2024-11-20 18:04:11.153106] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:44.013 [2024-11-20 18:04:11.153150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:44.013 [2024-11-20 18:04:11.153234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:29:44.013 [2024-11-20 18:04:11.153267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.304 [2024-11-20 18:04:11.206216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.304 [2024-11-20 18:04:11.206356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:44.304 [2024-11-20 18:04:11.206429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.304 [2024-11-20 18:04:11.206464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.304 [2024-11-20 18:04:11.206542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.304 [2024-11-20 18:04:11.206574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:44.304 [2024-11-20 18:04:11.206611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.304 [2024-11-20 18:04:11.206641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.304 [2024-11-20 18:04:11.206734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.304 [2024-11-20 18:04:11.206929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:44.304 [2024-11-20 18:04:11.206996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.304 [2024-11-20 18:04:11.207024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.304 [2024-11-20 18:04:11.207063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.304 [2024-11-20 18:04:11.207093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:44.304 [2024-11-20 18:04:11.207121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.304 [2024-11-20 18:04:11.207154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.304 [2024-11-20 18:04:11.334298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.304 [2024-11-20 18:04:11.334503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:44.304 [2024-11-20 18:04:11.334529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.304 [2024-11-20 18:04:11.334541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.304 [2024-11-20 18:04:11.438676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.304 [2024-11-20 18:04:11.438741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:44.304 [2024-11-20 18:04:11.438764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.304 [2024-11-20 18:04:11.438789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.304 [2024-11-20 18:04:11.438905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.304 [2024-11-20 18:04:11.438919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:44.304 [2024-11-20 18:04:11.438932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.304 [2024-11-20 18:04:11.438942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:29:44.304 [2024-11-20 18:04:11.438991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.304 [2024-11-20 18:04:11.439003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:44.304 [2024-11-20 18:04:11.439015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.304 [2024-11-20 18:04:11.439026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.304 [2024-11-20 18:04:11.439149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.304 [2024-11-20 18:04:11.439164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:44.304 [2024-11-20 18:04:11.439175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.304 [2024-11-20 18:04:11.439186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.304 [2024-11-20 18:04:11.439223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.304 [2024-11-20 18:04:11.439236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:44.304 [2024-11-20 18:04:11.439248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.304 [2024-11-20 18:04:11.439258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.304 [2024-11-20 18:04:11.439309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.304 [2024-11-20 18:04:11.439321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:44.304 [2024-11-20 18:04:11.439332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.304 [2024-11-20 18:04:11.439343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.304 [2024-11-20 18:04:11.439404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:44.304 [2024-11-20 18:04:11.439417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:44.304 [2024-11-20 18:04:11.439427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:44.304 [2024-11-20 18:04:11.439438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:44.304 [2024-11-20 18:04:11.439583] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 522.807 ms, result 0 00:29:45.684 00:29:45.684 00:29:45.684 18:04:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:47.590 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:47.590 18:04:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:47.590 18:04:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:47.590 18:04:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:47.590 18:04:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:47.590 18:04:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:47.590 18:04:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:47.590 18:04:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 
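The md5sum -c pass just above is the point of the dirty-shutdown exercise: a checksum recorded earlier in the test must still verify after the FTL device has been torn down and brought back through the recovery path (the "Restore …" steps traced earlier). Stripped of the harness, the verify pattern reduces to the sketch below, with hypothetical file names standing in for the paths the test generates:

    # Checksum round-trip, as in dirty_shutdown.sh; testfile2(.md5) are
    # placeholder names for the files the harness creates.
    md5sum testfile2 > testfile2.md5   # record checksum before the shutdown
    # ... kill the target mid-write, restart it, let FTL recover ...
    md5sum -c testfile2.md5            # non-zero exit if recovery corrupted data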
00:29:47.590 18:04:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81180 00:29:47.590 Process with pid 81180 is not found 00:29:47.590 18:04:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81180 ']' 00:29:47.590 18:04:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81180 00:29:47.590 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81180) - No such process 00:29:47.590 18:04:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81180 is not found' 00:29:47.590 18:04:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:47.850 Remove shared memory files 00:29:47.850 18:04:14 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:47.850 18:04:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:47.850 18:04:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:47.850 18:04:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:47.850 18:04:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:47.850 18:04:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:47.850 18:04:14 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:47.850 ************************************ 00:29:47.850 END TEST ftl_dirty_shutdown 00:29:47.850 ************************************ 00:29:47.850 00:29:47.850 real 3m45.096s 00:29:47.850 user 4m10.862s 00:29:47.850 sys 0m40.412s 00:29:47.850 18:04:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.850 18:04:14 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:47.850 18:04:15 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:47.850 18:04:15 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:47.850 18:04:15 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:47.850 18:04:15 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:48.110 ************************************ 00:29:48.110 START TEST ftl_upgrade_shutdown 00:29:48.110 ************************************ 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:48.110 * Looking for test storage... 
00:29:48.110 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:48.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.110 --rc genhtml_branch_coverage=1 00:29:48.110 --rc genhtml_function_coverage=1 00:29:48.110 --rc genhtml_legend=1 00:29:48.110 --rc geninfo_all_blocks=1 00:29:48.110 --rc geninfo_unexecuted_blocks=1 00:29:48.110 00:29:48.110 ' 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:48.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.110 --rc genhtml_branch_coverage=1 00:29:48.110 --rc genhtml_function_coverage=1 00:29:48.110 --rc genhtml_legend=1 00:29:48.110 --rc geninfo_all_blocks=1 00:29:48.110 --rc geninfo_unexecuted_blocks=1 00:29:48.110 00:29:48.110 ' 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:48.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.110 --rc genhtml_branch_coverage=1 00:29:48.110 --rc genhtml_function_coverage=1 00:29:48.110 --rc genhtml_legend=1 00:29:48.110 --rc geninfo_all_blocks=1 00:29:48.110 --rc geninfo_unexecuted_blocks=1 00:29:48.110 00:29:48.110 ' 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:48.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.110 --rc genhtml_branch_coverage=1 00:29:48.110 --rc genhtml_function_coverage=1 00:29:48.110 --rc genhtml_legend=1 00:29:48.110 --rc geninfo_all_blocks=1 00:29:48.110 --rc geninfo_unexecuted_blocks=1 00:29:48.110 00:29:48.110 ' 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:48.110 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:48.111 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:48.111 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:48.111 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:48.111 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:48.111 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:48.111 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:48.370 18:04:15 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83572 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83572 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83572 ']' 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.370 18:04:15 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:48.370 [2024-11-20 18:04:15.400364] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:29:48.370 [2024-11-20 18:04:15.400492] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83572 ] 00:29:48.630 [2024-11-20 18:04:15.581796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.630 [2024-11-20 18:04:15.714491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:49.624 18:04:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:49.882 18:04:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:49.882 18:04:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:49.882 18:04:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:49.882 18:04:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:49.882 18:04:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:49.882 18:04:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:49.882 18:04:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:49.882 18:04:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:50.140 18:04:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:50.140 { 00:29:50.140 "name": "basen1", 00:29:50.140 "aliases": [ 00:29:50.140 "ed58609e-3948-46e1-abbf-44d90fb9036d" 00:29:50.140 ], 00:29:50.140 "product_name": "NVMe disk", 00:29:50.140 "block_size": 4096, 00:29:50.140 "num_blocks": 1310720, 00:29:50.140 "uuid": "ed58609e-3948-46e1-abbf-44d90fb9036d", 00:29:50.140 "numa_id": -1, 00:29:50.140 "assigned_rate_limits": { 00:29:50.140 "rw_ios_per_sec": 0, 00:29:50.140 "rw_mbytes_per_sec": 0, 00:29:50.140 "r_mbytes_per_sec": 0, 00:29:50.140 "w_mbytes_per_sec": 0 00:29:50.140 }, 00:29:50.140 "claimed": true, 00:29:50.140 "claim_type": "read_many_write_one", 00:29:50.140 "zoned": false, 00:29:50.140 "supported_io_types": { 00:29:50.140 "read": true, 00:29:50.140 "write": true, 00:29:50.140 "unmap": true, 00:29:50.140 "flush": true, 00:29:50.140 "reset": true, 00:29:50.140 "nvme_admin": true, 00:29:50.140 "nvme_io": true, 00:29:50.140 "nvme_io_md": false, 00:29:50.140 "write_zeroes": true, 00:29:50.140 "zcopy": false, 00:29:50.140 "get_zone_info": false, 00:29:50.140 "zone_management": false, 00:29:50.140 "zone_append": false, 00:29:50.140 "compare": true, 00:29:50.140 "compare_and_write": false, 00:29:50.140 "abort": true, 00:29:50.140 "seek_hole": false, 00:29:50.140 "seek_data": false, 00:29:50.140 "copy": true, 00:29:50.140 "nvme_iov_md": false 00:29:50.140 }, 00:29:50.140 "driver_specific": { 00:29:50.140 "nvme": [ 00:29:50.140 { 00:29:50.140 "pci_address": "0000:00:11.0", 00:29:50.140 "trid": { 00:29:50.140 "trtype": "PCIe", 00:29:50.140 "traddr": "0000:00:11.0" 00:29:50.140 }, 00:29:50.140 "ctrlr_data": { 00:29:50.140 "cntlid": 0, 00:29:50.140 "vendor_id": "0x1b36", 00:29:50.140 "model_number": "QEMU NVMe Ctrl", 00:29:50.140 "serial_number": "12341", 00:29:50.140 "firmware_revision": "8.0.0", 00:29:50.140 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:50.140 "oacs": { 00:29:50.140 "security": 0, 00:29:50.140 "format": 1, 00:29:50.140 "firmware": 0, 00:29:50.140 "ns_manage": 1 00:29:50.140 }, 00:29:50.140 "multi_ctrlr": false, 00:29:50.140 "ana_reporting": false 00:29:50.140 }, 00:29:50.140 "vs": { 00:29:50.140 "nvme_version": "1.4" 00:29:50.140 }, 00:29:50.140 "ns_data": { 00:29:50.140 "id": 1, 00:29:50.140 "can_share": false 00:29:50.140 } 00:29:50.140 } 00:29:50.140 ], 00:29:50.140 "mp_policy": "active_passive" 00:29:50.140 } 00:29:50.140 } 00:29:50.140 ]' 00:29:50.140 18:04:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:50.140 18:04:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:50.140 18:04:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:50.140 18:04:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:50.140 18:04:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:50.140 18:04:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:50.140 18:04:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:50.140 18:04:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:50.140 18:04:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:50.399 18:04:17 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:50.399 18:04:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:50.399 18:04:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=f9547cfc-9a37-410f-9e03-0dbf21a9dc77 00:29:50.399 18:04:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:50.399 18:04:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9547cfc-9a37-410f-9e03-0dbf21a9dc77 00:29:50.657 18:04:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:50.916 18:04:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=5c35f0c6-9a77-4d87-aafa-1f917d6b2195 00:29:50.916 18:04:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 5c35f0c6-9a77-4d87-aafa-1f917d6b2195 00:29:51.175 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=d951518b-94e2-4665-86fe-d7a116864f11 00:29:51.175 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z d951518b-94e2-4665-86fe-d7a116864f11 ]] 00:29:51.175 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 d951518b-94e2-4665-86fe-d7a116864f11 5120 00:29:51.175 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:51.175 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:51.175 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=d951518b-94e2-4665-86fe-d7a116864f11 00:29:51.175 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:51.175 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size d951518b-94e2-4665-86fe-d7a116864f11 00:29:51.175 18:04:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=d951518b-94e2-4665-86fe-d7a116864f11 00:29:51.175 18:04:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:51.175 18:04:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:51.175 18:04:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:51.175 18:04:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b d951518b-94e2-4665-86fe-d7a116864f11 00:29:51.433 18:04:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:51.433 { 00:29:51.433 "name": "d951518b-94e2-4665-86fe-d7a116864f11", 00:29:51.433 "aliases": [ 00:29:51.433 "lvs/basen1p0" 00:29:51.433 ], 00:29:51.433 "product_name": "Logical Volume", 00:29:51.433 "block_size": 4096, 00:29:51.433 "num_blocks": 5242880, 00:29:51.433 "uuid": "d951518b-94e2-4665-86fe-d7a116864f11", 00:29:51.433 "assigned_rate_limits": { 00:29:51.433 "rw_ios_per_sec": 0, 00:29:51.433 "rw_mbytes_per_sec": 0, 00:29:51.433 "r_mbytes_per_sec": 0, 00:29:51.433 "w_mbytes_per_sec": 0 00:29:51.433 }, 00:29:51.433 "claimed": false, 00:29:51.433 "zoned": false, 00:29:51.433 "supported_io_types": { 00:29:51.433 "read": true, 00:29:51.433 "write": true, 00:29:51.433 "unmap": true, 00:29:51.433 "flush": false, 00:29:51.433 "reset": true, 00:29:51.433 "nvme_admin": false, 00:29:51.433 "nvme_io": false, 00:29:51.433 "nvme_io_md": false, 00:29:51.433 "write_zeroes": 
true, 00:29:51.433 "zcopy": false, 00:29:51.433 "get_zone_info": false, 00:29:51.433 "zone_management": false, 00:29:51.433 "zone_append": false, 00:29:51.433 "compare": false, 00:29:51.433 "compare_and_write": false, 00:29:51.433 "abort": false, 00:29:51.433 "seek_hole": true, 00:29:51.433 "seek_data": true, 00:29:51.433 "copy": false, 00:29:51.433 "nvme_iov_md": false 00:29:51.433 }, 00:29:51.433 "driver_specific": { 00:29:51.433 "lvol": { 00:29:51.433 "lvol_store_uuid": "5c35f0c6-9a77-4d87-aafa-1f917d6b2195", 00:29:51.433 "base_bdev": "basen1", 00:29:51.433 "thin_provision": true, 00:29:51.433 "num_allocated_clusters": 0, 00:29:51.433 "snapshot": false, 00:29:51.433 "clone": false, 00:29:51.433 "esnap_clone": false 00:29:51.433 } 00:29:51.433 } 00:29:51.433 } 00:29:51.433 ]' 00:29:51.434 18:04:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:51.434 18:04:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:51.434 18:04:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:51.434 18:04:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:51.434 18:04:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:51.434 18:04:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:51.434 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:51.434 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:51.434 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:51.692 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:51.692 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:51.692 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:51.951 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:51.951 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:51.951 18:04:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d d951518b-94e2-4665-86fe-d7a116864f11 -c cachen1p0 --l2p_dram_limit 2 00:29:51.951 [2024-11-20 18:04:19.120739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.951 [2024-11-20 18:04:19.120808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:51.951 [2024-11-20 18:04:19.120832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:51.951 [2024-11-20 18:04:19.120844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.951 [2024-11-20 18:04:19.120928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.951 [2024-11-20 18:04:19.120942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:51.951 [2024-11-20 18:04:19.120957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:29:51.951 [2024-11-20 18:04:19.120968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.951 [2024-11-20 18:04:19.120994] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:51.951 [2024-11-20 
18:04:19.122073] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:51.951 [2024-11-20 18:04:19.122116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:51.951 [2024-11-20 18:04:19.122129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:51.951 [2024-11-20 18:04:19.122144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.125 ms 00:29:51.951 [2024-11-20 18:04:19.122155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:51.951 [2024-11-20 18:04:19.122235] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 2709fbf5-d0b9-4ebc-bb91-2e95d7678b7f 00:29:51.951 [2024-11-20 18:04:19.124548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.258 [2024-11-20 18:04:19.124732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:52.258 [2024-11-20 18:04:19.124755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:29:52.258 [2024-11-20 18:04:19.124781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.258 [2024-11-20 18:04:19.138694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.258 [2024-11-20 18:04:19.138905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:52.258 [2024-11-20 18:04:19.138929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.855 ms 00:29:52.258 [2024-11-20 18:04:19.138944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.258 [2024-11-20 18:04:19.138998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.258 [2024-11-20 18:04:19.139015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:52.258 [2024-11-20 18:04:19.139027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:29:52.258 [2024-11-20 18:04:19.139045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.258 [2024-11-20 18:04:19.139104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.258 [2024-11-20 18:04:19.139121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:52.258 [2024-11-20 18:04:19.139133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:29:52.258 [2024-11-20 18:04:19.139153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.258 [2024-11-20 18:04:19.139178] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:52.258 [2024-11-20 18:04:19.144902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.258 [2024-11-20 18:04:19.144935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:52.258 [2024-11-20 18:04:19.144954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.737 ms 00:29:52.258 [2024-11-20 18:04:19.144964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.258 [2024-11-20 18:04:19.144996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.258 [2024-11-20 18:04:19.145007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:52.258 [2024-11-20 18:04:19.145021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:52.258 [2024-11-20 18:04:19.145033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:52.258 [2024-11-20 18:04:19.145069] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:52.258 [2024-11-20 18:04:19.145203] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:52.258 [2024-11-20 18:04:19.145226] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:52.258 [2024-11-20 18:04:19.145241] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:52.258 [2024-11-20 18:04:19.145259] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:52.258 [2024-11-20 18:04:19.145272] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:52.258 [2024-11-20 18:04:19.145287] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:52.258 [2024-11-20 18:04:19.145298] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:52.258 [2024-11-20 18:04:19.145315] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:52.258 [2024-11-20 18:04:19.145326] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:52.258 [2024-11-20 18:04:19.145341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.258 [2024-11-20 18:04:19.145352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:52.258 [2024-11-20 18:04:19.145366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.273 ms 00:29:52.258 [2024-11-20 18:04:19.145386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.258 [2024-11-20 18:04:19.145479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.258 [2024-11-20 18:04:19.145491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:52.258 [2024-11-20 18:04:19.145507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:29:52.258 [2024-11-20 18:04:19.145530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.258 [2024-11-20 18:04:19.145636] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:52.258 [2024-11-20 18:04:19.145651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:52.258 [2024-11-20 18:04:19.145666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:52.258 [2024-11-20 18:04:19.145678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.258 [2024-11-20 18:04:19.145693] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:52.258 [2024-11-20 18:04:19.145702] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:52.258 [2024-11-20 18:04:19.145716] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:52.258 [2024-11-20 18:04:19.145727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:52.258 [2024-11-20 18:04:19.145739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:52.258 [2024-11-20 18:04:19.145749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.258 [2024-11-20 18:04:19.145761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:52.258 [2024-11-20 18:04:19.145772] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:52.258 [2024-11-20 18:04:19.145802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.258 [2024-11-20 18:04:19.145814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:52.258 [2024-11-20 18:04:19.145827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:52.258 [2024-11-20 18:04:19.145837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.258 [2024-11-20 18:04:19.145853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:52.258 [2024-11-20 18:04:19.145863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:52.258 [2024-11-20 18:04:19.145877] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.258 [2024-11-20 18:04:19.145887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:52.258 [2024-11-20 18:04:19.145900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:52.258 [2024-11-20 18:04:19.145932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:52.258 [2024-11-20 18:04:19.145946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:52.258 [2024-11-20 18:04:19.145960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:52.258 [2024-11-20 18:04:19.145972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:52.258 [2024-11-20 18:04:19.145982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:52.258 [2024-11-20 18:04:19.145995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:52.258 [2024-11-20 18:04:19.146005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:52.258 [2024-11-20 18:04:19.146018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:52.258 [2024-11-20 18:04:19.146028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:52.258 [2024-11-20 18:04:19.146041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:52.258 [2024-11-20 18:04:19.146051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:52.258 [2024-11-20 18:04:19.146067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:52.258 [2024-11-20 18:04:19.146082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.258 [2024-11-20 18:04:19.146095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:52.258 [2024-11-20 18:04:19.146104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:52.258 [2024-11-20 18:04:19.146116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.258 [2024-11-20 18:04:19.146126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:52.258 [2024-11-20 18:04:19.146139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:52.258 [2024-11-20 18:04:19.146148] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.258 [2024-11-20 18:04:19.146161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:52.258 [2024-11-20 18:04:19.146171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:52.258 [2024-11-20 18:04:19.146183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.258 [2024-11-20 18:04:19.146192] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:52.258 [2024-11-20 18:04:19.146206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:52.258 [2024-11-20 18:04:19.146216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:52.258 [2024-11-20 18:04:19.146231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:52.258 [2024-11-20 18:04:19.146243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:52.258 [2024-11-20 18:04:19.146259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:52.258 [2024-11-20 18:04:19.146268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:52.258 [2024-11-20 18:04:19.146282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:52.258 [2024-11-20 18:04:19.146292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:52.258 [2024-11-20 18:04:19.146304] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:52.258 [2024-11-20 18:04:19.146319] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:52.258 [2024-11-20 18:04:19.146336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:52.258 [2024-11-20 18:04:19.146353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:52.258 [2024-11-20 18:04:19.146368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:52.258 [2024-11-20 18:04:19.146379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:52.258 [2024-11-20 18:04:19.146393] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:52.258 [2024-11-20 18:04:19.146404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:52.258 [2024-11-20 18:04:19.146418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:52.258 [2024-11-20 18:04:19.146429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:52.258 [2024-11-20 18:04:19.146443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:52.258 [2024-11-20 18:04:19.146454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:52.258 [2024-11-20 18:04:19.146471] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:52.258 [2024-11-20 18:04:19.146483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:52.258 [2024-11-20 18:04:19.146497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:52.258 [2024-11-20 18:04:19.146518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:52.258 [2024-11-20 18:04:19.146533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:52.258 [2024-11-20 18:04:19.146544] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:52.258 [2024-11-20 18:04:19.146559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:52.258 [2024-11-20 18:04:19.146570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:52.258 [2024-11-20 18:04:19.146584] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:52.258 [2024-11-20 18:04:19.146594] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:52.258 [2024-11-20 18:04:19.146607] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:52.258 [2024-11-20 18:04:19.146619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:52.258 [2024-11-20 18:04:19.146634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:52.258 [2024-11-20 18:04:19.146645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.047 ms 00:29:52.258 [2024-11-20 18:04:19.146658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:52.258 [2024-11-20 18:04:19.146698] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
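The FTL startup traced above runs against the bdev stack assembled earlier in this test. A minimal sketch of that setup sequence, using the same RPCs the trace shows; here rpc.py stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py, and the <lvstore-uuid> and <lvol-uuid> placeholders stand in for the run-specific UUIDs printed above (a fresh run generates new ones):
  rpc.py bdev_lvol_create_lvstore basen1 lvs                    # prints <lvstore-uuid>
  rpc.py bdev_lvol_create basen1p0 20480 -t -u <lvstore-uuid>   # thin-provisioned 20 GiB lvol, prints <lvol-uuid>
  rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
  rpc.py bdev_split_create cachen1 -s 5120 1                    # one 5 GiB partition, cachen1p0
  rpc.py -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2
The 20480/5120 sizes match the "Base device capacity: 20480.00 MiB" and "NV cache device capacity: 5120.00 MiB" lines in the layout dump above.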
00:29:52.258 [2024-11-20 18:04:19.146719] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:56.483 [2024-11-20 18:04:22.946737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.483 [2024-11-20 18:04:22.946842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:56.483 [2024-11-20 18:04:22.946865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3806.207 ms 00:29:56.483 [2024-11-20 18:04:22.946881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.483 [2024-11-20 18:04:22.994128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.483 [2024-11-20 18:04:22.994194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:56.483 [2024-11-20 18:04:22.994215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.983 ms 00:29:56.483 [2024-11-20 18:04:22.994231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.483 [2024-11-20 18:04:22.994324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.483 [2024-11-20 18:04:22.994343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:56.483 [2024-11-20 18:04:22.994355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:29:56.483 [2024-11-20 18:04:22.994378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.483 [2024-11-20 18:04:23.047635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.483 [2024-11-20 18:04:23.047708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:56.483 [2024-11-20 18:04:23.047725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 53.281 ms 00:29:56.483 [2024-11-20 18:04:23.047739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.483 [2024-11-20 18:04:23.047789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.483 [2024-11-20 18:04:23.047811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:56.483 [2024-11-20 18:04:23.047824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:56.483 [2024-11-20 18:04:23.047842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.483 [2024-11-20 18:04:23.048679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.483 [2024-11-20 18:04:23.048704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:56.483 [2024-11-20 18:04:23.048716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.780 ms 00:29:56.483 [2024-11-20 18:04:23.048730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.483 [2024-11-20 18:04:23.048798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.483 [2024-11-20 18:04:23.048814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:56.483 [2024-11-20 18:04:23.048828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:29:56.483 [2024-11-20 18:04:23.048846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.483 [2024-11-20 18:04:23.072714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.483 [2024-11-20 18:04:23.072761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:56.483 [2024-11-20 18:04:23.072792] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.883 ms 00:29:56.483 [2024-11-20 18:04:23.072807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.483 [2024-11-20 18:04:23.099020] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:56.483 [2024-11-20 18:04:23.100750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.483 [2024-11-20 18:04:23.101026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:56.483 [2024-11-20 18:04:23.101060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.891 ms 00:29:56.483 [2024-11-20 18:04:23.101074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.483 [2024-11-20 18:04:23.137173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.483 [2024-11-20 18:04:23.137211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:56.483 [2024-11-20 18:04:23.137230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.115 ms 00:29:56.483 [2024-11-20 18:04:23.137242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.483 [2024-11-20 18:04:23.137343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.483 [2024-11-20 18:04:23.137360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:56.483 [2024-11-20 18:04:23.137385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:29:56.483 [2024-11-20 18:04:23.137397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.483 [2024-11-20 18:04:23.173002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.483 [2024-11-20 18:04:23.173040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:56.483 [2024-11-20 18:04:23.173058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.606 ms 00:29:56.483 [2024-11-20 18:04:23.173070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.483 [2024-11-20 18:04:23.208737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.483 [2024-11-20 18:04:23.208929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:56.484 [2024-11-20 18:04:23.208958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.673 ms 00:29:56.484 [2024-11-20 18:04:23.208969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.484 [2024-11-20 18:04:23.209739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.484 [2024-11-20 18:04:23.209762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:56.484 [2024-11-20 18:04:23.209796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.700 ms 00:29:56.484 [2024-11-20 18:04:23.209811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.484 [2024-11-20 18:04:23.315561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.484 [2024-11-20 18:04:23.315599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:56.484 [2024-11-20 18:04:23.315622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 105.845 ms 00:29:56.484 [2024-11-20 18:04:23.315633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.484 [2024-11-20 18:04:23.352969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:56.484 [2024-11-20 18:04:23.353147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:56.484 [2024-11-20 18:04:23.353186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 37.297 ms 00:29:56.484 [2024-11-20 18:04:23.353198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.484 [2024-11-20 18:04:23.388400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.484 [2024-11-20 18:04:23.388565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:56.484 [2024-11-20 18:04:23.388593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.189 ms 00:29:56.484 [2024-11-20 18:04:23.388604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.484 [2024-11-20 18:04:23.424361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.484 [2024-11-20 18:04:23.424395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:56.484 [2024-11-20 18:04:23.424413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.732 ms 00:29:56.484 [2024-11-20 18:04:23.424423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.484 [2024-11-20 18:04:23.424473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.484 [2024-11-20 18:04:23.424485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:56.484 [2024-11-20 18:04:23.424504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:29:56.484 [2024-11-20 18:04:23.424514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.484 [2024-11-20 18:04:23.424640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:56.484 [2024-11-20 18:04:23.424654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:56.484 [2024-11-20 18:04:23.424672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:29:56.484 [2024-11-20 18:04:23.424683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:56.484 [2024-11-20 18:04:23.426136] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4311.860 ms, result 0 00:29:56.484 { 00:29:56.484 "name": "ftl", 00:29:56.484 "uuid": "2709fbf5-d0b9-4ebc-bb91-2e95d7678b7f" 00:29:56.484 } 00:29:56.484 18:04:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:56.484 [2024-11-20 18:04:23.640590] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:56.484 18:04:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:56.743 18:04:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:57.001 [2024-11-20 18:04:24.012242] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:57.001 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:57.260 [2024-11-20 18:04:24.217942] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:57.260 18:04:24 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:57.519 Fill FTL, iteration 1 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83700 00:29:57.519 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:57.520 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:57.520 18:04:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83700 /var/tmp/spdk.tgt.sock 00:29:57.520 18:04:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83700 ']' 00:29:57.520 18:04:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:57.520 18:04:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.520 18:04:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:57.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:57.520 18:04:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.520 18:04:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:57.520 [2024-11-20 18:04:24.676452] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:29:57.520 [2024-11-20 18:04:24.677008] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83700 ] 00:29:57.779 [2024-11-20 18:04:24.859135] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:58.036 [2024-11-20 18:04:24.972754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:58.972 18:04:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:58.972 18:04:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:58.972 18:04:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:59.231 ftln1 00:29:59.231 18:04:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:59.231 18:04:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:59.489 18:04:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:59.489 18:04:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83700 00:29:59.489 18:04:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83700 ']' 00:29:59.489 18:04:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83700 00:29:59.489 18:04:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:59.489 18:04:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:59.489 18:04:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83700 00:29:59.489 killing process with pid 83700 00:29:59.489 18:04:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:59.489 18:04:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:59.489 18:04:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83700' 00:29:59.489 18:04:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83700 00:29:59.489 18:04:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83700 00:30:02.024 18:04:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:30:02.024 18:04:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:30:02.024 [2024-11-20 18:04:29.193589] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:30:02.024 [2024-11-20 18:04:29.193723] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83764 ] 00:30:02.283 [2024-11-20 18:04:29.371300] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.541 [2024-11-20 18:04:29.482704] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.918  [2024-11-20T18:04:32.031Z] Copying: 247/1024 [MB] (247 MBps) [2024-11-20T18:04:32.966Z] Copying: 491/1024 [MB] (244 MBps) [2024-11-20T18:04:34.344Z] Copying: 736/1024 [MB] (245 MBps) [2024-11-20T18:04:34.344Z] Copying: 984/1024 [MB] (248 MBps) [2024-11-20T18:04:35.282Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:30:08.106 00:30:08.106 Calculate MD5 checksum, iteration 1 00:30:08.106 18:04:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:30:08.106 18:04:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:30:08.106 18:04:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:08.106 18:04:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:08.106 18:04:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:08.106 18:04:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:08.106 18:04:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:08.106 18:04:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:08.366 [2024-11-20 18:04:35.347252] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:30:08.366 [2024-11-20 18:04:35.347533] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83828 ] 00:30:08.366 [2024-11-20 18:04:35.529632] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:08.626 [2024-11-20 18:04:35.642198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.004  [2024-11-20T18:04:38.117Z] Copying: 615/1024 [MB] (615 MBps) [2024-11-20T18:04:39.054Z] Copying: 1024/1024 [MB] (average 608 MBps) 00:30:11.878 00:30:11.878 18:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:30:11.878 18:04:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:13.258 18:04:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:13.258 18:04:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=873f85e2d2b36ae96bd124f6339a7e17 00:30:13.258 18:04:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:13.258 18:04:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:13.258 Fill FTL, iteration 2 00:30:13.258 18:04:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:30:13.258 18:04:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:13.258 18:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:13.258 18:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:13.258 18:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:13.258 18:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:13.258 18:04:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:30:13.517 [2024-11-20 18:04:40.504863] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
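Each fill-and-verify pass above follows the same pattern: spdk_dd writes pseudo-random data into the exported ftln1 namespace through the NVMe/TCP initiator config, then reads it back into a file whose md5sum becomes the checksum recorded for that iteration. A minimal sketch of one pass, with the exact binary and paths the trace uses (seek and skip advance by 1024 one-MiB blocks per iteration, so iteration 2 uses --seek=1024 and --skip=1024):
  # fill: 1024 x 1 MiB of /dev/urandom into ftln1 at queue depth 2
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0
  # verify: read the same range back into a file and checksum it
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file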
00:30:13.517 [2024-11-20 18:04:40.504987] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83885 ] 00:30:13.517 [2024-11-20 18:04:40.684203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.774 [2024-11-20 18:04:40.792095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:15.148  [2024-11-20T18:04:43.261Z] Copying: 247/1024 [MB] (247 MBps) [2024-11-20T18:04:44.639Z] Copying: 482/1024 [MB] (235 MBps) [2024-11-20T18:04:45.573Z] Copying: 719/1024 [MB] (237 MBps) [2024-11-20T18:04:45.573Z] Copying: 952/1024 [MB] (233 MBps) [2024-11-20T18:04:46.947Z] Copying: 1024/1024 [MB] (average 237 MBps) 00:30:19.771 00:30:19.771 Calculate MD5 checksum, iteration 2 00:30:19.771 18:04:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:19.771 18:04:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:19.771 18:04:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:19.771 18:04:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:19.771 18:04:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:19.771 18:04:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:19.771 18:04:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:19.771 18:04:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:19.771 [2024-11-20 18:04:46.774090] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:30:19.771 [2024-11-20 18:04:46.774382] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83949 ] 00:30:20.029 [2024-11-20 18:04:46.952936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:20.029 [2024-11-20 18:04:47.072718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.930  [2024-11-20T18:04:49.706Z] Copying: 618/1024 [MB] (618 MBps) [2024-11-20T18:04:51.081Z] Copying: 1024/1024 [MB] (average 614 MBps) 00:30:23.906 00:30:23.906 18:04:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:23.906 18:04:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:25.284 18:04:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:25.284 18:04:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=e8bd9282b0351224aeaeb5046ad56db4 00:30:25.284 18:04:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:25.284 18:04:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:25.284 18:04:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:25.543 [2024-11-20 18:04:52.605917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.543 [2024-11-20 18:04:52.606166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:25.543 [2024-11-20 18:04:52.606197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:30:25.543 [2024-11-20 18:04:52.606210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.543 [2024-11-20 18:04:52.606253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.543 [2024-11-20 18:04:52.606265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:25.543 [2024-11-20 18:04:52.606283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:25.543 [2024-11-20 18:04:52.606295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.543 [2024-11-20 18:04:52.606318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:25.543 [2024-11-20 18:04:52.606330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:25.543 [2024-11-20 18:04:52.606341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:25.543 [2024-11-20 18:04:52.606353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:25.543 [2024-11-20 18:04:52.606446] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.513 ms, result 0 00:30:25.543 true 00:30:25.543 18:04:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:25.802 { 00:30:25.802 "name": "ftl", 00:30:25.802 "properties": [ 00:30:25.802 { 00:30:25.802 "name": "superblock_version", 00:30:25.802 "value": 5, 00:30:25.802 "read-only": true 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "name": "base_device", 00:30:25.802 "bands": [ 00:30:25.802 { 00:30:25.802 "id": 0, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 
00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 1, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 2, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 3, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 4, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 5, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 6, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 7, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 8, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 9, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 10, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 11, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 12, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 13, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 14, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 15, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 16, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 17, 00:30:25.802 "state": "FREE", 00:30:25.802 "validity": 0.0 00:30:25.802 } 00:30:25.802 ], 00:30:25.802 "read-only": true 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "name": "cache_device", 00:30:25.802 "type": "bdev", 00:30:25.802 "chunks": [ 00:30:25.802 { 00:30:25.802 "id": 0, 00:30:25.802 "state": "INACTIVE", 00:30:25.802 "utilization": 0.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 1, 00:30:25.802 "state": "CLOSED", 00:30:25.802 "utilization": 1.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 2, 00:30:25.802 "state": "CLOSED", 00:30:25.802 "utilization": 1.0 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 3, 00:30:25.802 "state": "OPEN", 00:30:25.802 "utilization": 0.001953125 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "id": 4, 00:30:25.802 "state": "OPEN", 00:30:25.802 "utilization": 0.0 00:30:25.802 } 00:30:25.802 ], 00:30:25.802 "read-only": true 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "name": "verbose_mode", 00:30:25.802 "value": true, 00:30:25.802 "unit": "", 00:30:25.802 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:25.802 }, 00:30:25.802 { 00:30:25.802 "name": "prep_upgrade_on_shutdown", 00:30:25.802 "value": false, 00:30:25.802 "unit": "", 00:30:25.802 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:25.802 } 00:30:25.802 ] 00:30:25.802 } 00:30:25.802 18:04:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:26.061 [2024-11-20 18:04:53.029575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
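The properties dump above still reports prep_upgrade_on_shutdown as false; the Decode property/Set property trace that follows is the toggle taking effect. A minimal sketch of the toggle and its verification, using the same RPCs as the test (rpc.py again standing for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):
  rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
  rpc.py bdev_ftl_get_properties -b ftl   # prep_upgrade_on_shutdown should now report "value": true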
00:30:26.061 [2024-11-20 18:04:53.029786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:26.061 [2024-11-20 18:04:53.029813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:26.061 [2024-11-20 18:04:53.029825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.061 [2024-11-20 18:04:53.029862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.061 [2024-11-20 18:04:53.029875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:26.061 [2024-11-20 18:04:53.029887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:26.061 [2024-11-20 18:04:53.029897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.061 [2024-11-20 18:04:53.029918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.061 [2024-11-20 18:04:53.029930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:26.061 [2024-11-20 18:04:53.029941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:26.061 [2024-11-20 18:04:53.029952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.061 [2024-11-20 18:04:53.030014] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.422 ms, result 0 00:30:26.061 true 00:30:26.061 18:04:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:26.061 18:04:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:26.061 18:04:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:26.320 18:04:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:26.320 18:04:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:26.320 18:04:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:26.320 [2024-11-20 18:04:53.449543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.320 [2024-11-20 18:04:53.449594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:26.320 [2024-11-20 18:04:53.449609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:26.320 [2024-11-20 18:04:53.449621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.320 [2024-11-20 18:04:53.449645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.320 [2024-11-20 18:04:53.449656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:26.320 [2024-11-20 18:04:53.449668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:26.320 [2024-11-20 18:04:53.449677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:26.320 [2024-11-20 18:04:53.449697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:26.320 [2024-11-20 18:04:53.449709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:26.320 [2024-11-20 18:04:53.449720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:26.320 [2024-11-20 18:04:53.449730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:26.320 [2024-11-20 18:04:53.449802] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.231 ms, result 0 00:30:26.320 true 00:30:26.320 18:04:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:26.579 { 00:30:26.579 "name": "ftl", 00:30:26.579 "properties": [ 00:30:26.579 { 00:30:26.579 "name": "superblock_version", 00:30:26.579 "value": 5, 00:30:26.579 "read-only": true 00:30:26.579 }, 00:30:26.579 { 00:30:26.579 "name": "base_device", 00:30:26.579 "bands": [ 00:30:26.579 { 00:30:26.579 "id": 0, 00:30:26.579 "state": "FREE", 00:30:26.579 "validity": 0.0 00:30:26.579 }, 00:30:26.579 { 00:30:26.579 "id": 1, 00:30:26.579 "state": "FREE", 00:30:26.579 "validity": 0.0 00:30:26.579 }, 00:30:26.579 { 00:30:26.579 "id": 2, 00:30:26.579 "state": "FREE", 00:30:26.579 "validity": 0.0 00:30:26.579 }, 00:30:26.579 { 00:30:26.579 "id": 3, 00:30:26.579 "state": "FREE", 00:30:26.579 "validity": 0.0 00:30:26.579 }, 00:30:26.579 { 00:30:26.579 "id": 4, 00:30:26.579 "state": "FREE", 00:30:26.579 "validity": 0.0 00:30:26.579 }, 00:30:26.579 { 00:30:26.579 "id": 5, 00:30:26.579 "state": "FREE", 00:30:26.579 "validity": 0.0 00:30:26.579 }, 00:30:26.579 { 00:30:26.579 "id": 6, 00:30:26.579 "state": "FREE", 00:30:26.579 "validity": 0.0 00:30:26.579 }, 00:30:26.579 { 00:30:26.579 "id": 7, 00:30:26.579 "state": "FREE", 00:30:26.579 "validity": 0.0 00:30:26.579 }, 00:30:26.579 { 00:30:26.579 "id": 8, 00:30:26.579 "state": "FREE", 00:30:26.579 "validity": 0.0 00:30:26.579 }, 00:30:26.579 { 00:30:26.579 "id": 9, 00:30:26.579 "state": "FREE", 00:30:26.579 "validity": 0.0 00:30:26.579 }, 00:30:26.579 { 00:30:26.579 "id": 10, 00:30:26.579 "state": "FREE", 00:30:26.579 "validity": 0.0 00:30:26.579 }, 00:30:26.579 { 00:30:26.579 "id": 11, 00:30:26.579 "state": "FREE", 00:30:26.579 "validity": 0.0 00:30:26.579 }, 00:30:26.579 { 00:30:26.579 "id": 12, 00:30:26.579 "state": "FREE", 00:30:26.579 "validity": 0.0 00:30:26.579 }, 00:30:26.579 { 00:30:26.579 "id": 13, 00:30:26.579 "state": "FREE", 00:30:26.580 "validity": 0.0 00:30:26.580 }, 00:30:26.580 { 00:30:26.580 "id": 14, 00:30:26.580 "state": "FREE", 00:30:26.580 "validity": 0.0 00:30:26.580 }, 00:30:26.580 { 00:30:26.580 "id": 15, 00:30:26.580 "state": "FREE", 00:30:26.580 "validity": 0.0 00:30:26.580 }, 00:30:26.580 { 00:30:26.580 "id": 16, 00:30:26.580 "state": "FREE", 00:30:26.580 "validity": 0.0 00:30:26.580 }, 00:30:26.580 { 00:30:26.580 "id": 17, 00:30:26.580 "state": "FREE", 00:30:26.580 "validity": 0.0 00:30:26.580 } 00:30:26.580 ], 00:30:26.580 "read-only": true 00:30:26.580 }, 00:30:26.580 { 00:30:26.580 "name": "cache_device", 00:30:26.580 "type": "bdev", 00:30:26.580 "chunks": [ 00:30:26.580 { 00:30:26.580 "id": 0, 00:30:26.580 "state": "INACTIVE", 00:30:26.580 "utilization": 0.0 00:30:26.580 }, 00:30:26.580 { 00:30:26.580 "id": 1, 00:30:26.580 "state": "CLOSED", 00:30:26.580 "utilization": 1.0 00:30:26.580 }, 00:30:26.580 { 00:30:26.580 "id": 2, 00:30:26.580 "state": "CLOSED", 00:30:26.580 "utilization": 1.0 00:30:26.580 }, 00:30:26.580 { 00:30:26.580 "id": 3, 00:30:26.580 "state": "OPEN", 00:30:26.580 "utilization": 0.001953125 00:30:26.580 }, 00:30:26.580 { 00:30:26.580 "id": 4, 00:30:26.580 "state": "OPEN", 00:30:26.580 "utilization": 0.0 00:30:26.580 } 00:30:26.580 ], 00:30:26.580 "read-only": true 00:30:26.580 }, 00:30:26.580 { 00:30:26.580 "name": "verbose_mode", 
00:30:26.580 "value": true, 00:30:26.580 "unit": "", 00:30:26.580 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:26.580 }, 00:30:26.580 { 00:30:26.580 "name": "prep_upgrade_on_shutdown", 00:30:26.580 "value": true, 00:30:26.580 "unit": "", 00:30:26.580 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:26.580 } 00:30:26.580 ] 00:30:26.580 } 00:30:26.580 18:04:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:26.580 18:04:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83572 ]] 00:30:26.580 18:04:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83572 00:30:26.580 18:04:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83572 ']' 00:30:26.580 18:04:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83572 00:30:26.580 18:04:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:26.580 18:04:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:26.580 18:04:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83572 00:30:26.580 killing process with pid 83572 00:30:26.580 18:04:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:26.580 18:04:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:26.580 18:04:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83572' 00:30:26.580 18:04:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83572 00:30:26.580 18:04:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83572 00:30:27.959 [2024-11-20 18:04:54.949831] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:27.959 [2024-11-20 18:04:54.970310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.959 [2024-11-20 18:04:54.970357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:27.959 [2024-11-20 18:04:54.970375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:27.959 [2024-11-20 18:04:54.970387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:27.959 [2024-11-20 18:04:54.970412] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:27.959 [2024-11-20 18:04:54.975156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:27.959 [2024-11-20 18:04:54.975207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:27.959 [2024-11-20 18:04:54.975221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.732 ms 00:30:27.959 [2024-11-20 18:04:54.975233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.094 [2024-11-20 18:05:02.219149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.094 [2024-11-20 18:05:02.219442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:36.094 [2024-11-20 18:05:02.219473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7255.633 ms 00:30:36.094 [2024-11-20 18:05:02.219494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.094 [2024-11-20 18:05:02.220685] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:36.094 [2024-11-20 18:05:02.220725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:36.094 [2024-11-20 18:05:02.220740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.167 ms 00:30:36.094 [2024-11-20 18:05:02.220752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.094 [2024-11-20 18:05:02.221705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.094 [2024-11-20 18:05:02.221735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:36.094 [2024-11-20 18:05:02.221749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.911 ms 00:30:36.094 [2024-11-20 18:05:02.221777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.094 [2024-11-20 18:05:02.237277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.094 [2024-11-20 18:05:02.237314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:36.094 [2024-11-20 18:05:02.237328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.475 ms 00:30:36.094 [2024-11-20 18:05:02.237340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.094 [2024-11-20 18:05:02.246882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.094 [2024-11-20 18:05:02.246917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:36.094 [2024-11-20 18:05:02.246931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.515 ms 00:30:36.094 [2024-11-20 18:05:02.246942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.094 [2024-11-20 18:05:02.247054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.094 [2024-11-20 18:05:02.247069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:36.094 [2024-11-20 18:05:02.247087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.076 ms 00:30:36.094 [2024-11-20 18:05:02.247098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.094 [2024-11-20 18:05:02.261178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.094 [2024-11-20 18:05:02.261353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:36.094 [2024-11-20 18:05:02.261381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.076 ms 00:30:36.094 [2024-11-20 18:05:02.261409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.094 [2024-11-20 18:05:02.275969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.094 [2024-11-20 18:05:02.276129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:36.094 [2024-11-20 18:05:02.276149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.511 ms 00:30:36.094 [2024-11-20 18:05:02.276160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.094 [2024-11-20 18:05:02.290799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.094 [2024-11-20 18:05:02.290957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:36.094 [2024-11-20 18:05:02.290977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.591 ms 00:30:36.094 [2024-11-20 18:05:02.290988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.094 [2024-11-20 18:05:02.305314] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.094 [2024-11-20 18:05:02.305348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:36.094 [2024-11-20 18:05:02.305367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.185 ms 00:30:36.094 [2024-11-20 18:05:02.305377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.094 [2024-11-20 18:05:02.305412] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:36.094 [2024-11-20 18:05:02.305429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:36.094 [2024-11-20 18:05:02.305443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:36.094 [2024-11-20 18:05:02.305468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:36.094 [2024-11-20 18:05:02.305481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:36.094 [2024-11-20 18:05:02.305492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:36.094 [2024-11-20 18:05:02.305503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:36.094 [2024-11-20 18:05:02.305515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:36.094 [2024-11-20 18:05:02.305526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:36.094 [2024-11-20 18:05:02.305537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:36.094 [2024-11-20 18:05:02.305548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:36.094 [2024-11-20 18:05:02.305559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:36.094 [2024-11-20 18:05:02.305570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:36.094 [2024-11-20 18:05:02.305580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:36.094 [2024-11-20 18:05:02.305591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:36.094 [2024-11-20 18:05:02.305602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:36.094 [2024-11-20 18:05:02.305613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:36.094 [2024-11-20 18:05:02.305623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:36.094 [2024-11-20 18:05:02.305634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:36.094 [2024-11-20 18:05:02.305648] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:36.094 [2024-11-20 18:05:02.305659] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 2709fbf5-d0b9-4ebc-bb91-2e95d7678b7f 00:30:36.094 [2024-11-20 18:05:02.305671] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:36.094 [2024-11-20 18:05:02.305681] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:36.094 [2024-11-20 18:05:02.305691] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:36.094 [2024-11-20 18:05:02.305703] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:36.094 [2024-11-20 18:05:02.305714] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:36.094 [2024-11-20 18:05:02.305729] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:36.094 [2024-11-20 18:05:02.305740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:36.094 [2024-11-20 18:05:02.305749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:36.094 [2024-11-20 18:05:02.305760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:36.094 [2024-11-20 18:05:02.305802] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.094 [2024-11-20 18:05:02.305818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:36.094 [2024-11-20 18:05:02.305830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.391 ms 00:30:36.094 [2024-11-20 18:05:02.305841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.094 [2024-11-20 18:05:02.326421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.094 [2024-11-20 18:05:02.326456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:36.094 [2024-11-20 18:05:02.326470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.563 ms 00:30:36.094 [2024-11-20 18:05:02.326487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.094 [2024-11-20 18:05:02.327041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:36.094 [2024-11-20 18:05:02.327062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:36.094 [2024-11-20 18:05:02.327073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.532 ms 00:30:36.094 [2024-11-20 18:05:02.327083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.094 [2024-11-20 18:05:02.394495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.094 [2024-11-20 18:05:02.394531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:36.094 [2024-11-20 18:05:02.394550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.094 [2024-11-20 18:05:02.394562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.094 [2024-11-20 18:05:02.394597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.094 [2024-11-20 18:05:02.394609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:36.095 [2024-11-20 18:05:02.394621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.095 [2024-11-20 18:05:02.394631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.095 [2024-11-20 18:05:02.394730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.095 [2024-11-20 18:05:02.394745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:36.095 [2024-11-20 18:05:02.394757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.095 [2024-11-20 18:05:02.394772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.095 [2024-11-20 18:05:02.394819] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.095 [2024-11-20 18:05:02.394841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:36.095 [2024-11-20 18:05:02.394853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.095 [2024-11-20 18:05:02.394865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.095 [2024-11-20 18:05:02.526736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.095 [2024-11-20 18:05:02.526811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:36.095 [2024-11-20 18:05:02.526835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.095 [2024-11-20 18:05:02.526846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.095 [2024-11-20 18:05:02.631073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.095 [2024-11-20 18:05:02.631126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:36.095 [2024-11-20 18:05:02.631143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.095 [2024-11-20 18:05:02.631154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.095 [2024-11-20 18:05:02.631284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.095 [2024-11-20 18:05:02.631298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:36.095 [2024-11-20 18:05:02.631311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.095 [2024-11-20 18:05:02.631321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.095 [2024-11-20 18:05:02.631385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.095 [2024-11-20 18:05:02.631398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:36.095 [2024-11-20 18:05:02.631410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.095 [2024-11-20 18:05:02.631420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.095 [2024-11-20 18:05:02.631546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.095 [2024-11-20 18:05:02.631560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:36.095 [2024-11-20 18:05:02.631572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.095 [2024-11-20 18:05:02.631582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.095 [2024-11-20 18:05:02.631627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.095 [2024-11-20 18:05:02.631646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:36.095 [2024-11-20 18:05:02.631658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.095 [2024-11-20 18:05:02.631669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.095 [2024-11-20 18:05:02.631717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.095 [2024-11-20 18:05:02.631729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:36.095 [2024-11-20 18:05:02.631740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.095 [2024-11-20 18:05:02.631751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.095 
[2024-11-20 18:05:02.631835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:36.095 [2024-11-20 18:05:02.631850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:36.095 [2024-11-20 18:05:02.631861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:36.095 [2024-11-20 18:05:02.631873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:36.095 [2024-11-20 18:05:02.632018] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7674.113 ms, result 0 00:30:39.378 18:05:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:39.378 18:05:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:39.378 18:05:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:39.378 18:05:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:39.378 18:05:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:39.378 18:05:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84146 00:30:39.378 18:05:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:39.378 18:05:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:39.378 18:05:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84146 00:30:39.378 18:05:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84146 ']' 00:30:39.378 18:05:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:39.378 18:05:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:39.378 18:05:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:39.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:39.378 18:05:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:39.378 18:05:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:39.378 [2024-11-20 18:05:06.331876] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
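A quick cross-check on the statistics dumped just before the 'FTL shutdown' finish message above (an illustrative sketch, not part of the test): the reported write amplification factor is simply total writes divided by user writes.

    # Values copied from the ftl_dev_dump_stats output above:
    #   total writes: 786752, user writes: 524288, reported WAF: 1.5006
    awk 'BEGIN { printf "WAF: %.4f\n", 786752 / 524288 }'   # prints WAF: 1.5006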
00:30:39.378 [2024-11-20 18:05:06.332150] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84146 ] 00:30:39.378 [2024-11-20 18:05:06.507728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:39.636 [2024-11-20 18:05:06.646538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:40.571 [2024-11-20 18:05:07.708303] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:40.571 [2024-11-20 18:05:07.708387] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:40.830 [2024-11-20 18:05:07.856581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.830 [2024-11-20 18:05:07.856637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:40.830 [2024-11-20 18:05:07.856655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:40.830 [2024-11-20 18:05:07.856666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.830 [2024-11-20 18:05:07.856734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.830 [2024-11-20 18:05:07.856748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:40.830 [2024-11-20 18:05:07.856759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:30:40.830 [2024-11-20 18:05:07.856784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.830 [2024-11-20 18:05:07.856810] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:40.830 [2024-11-20 18:05:07.857786] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:40.830 [2024-11-20 18:05:07.857818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.830 [2024-11-20 18:05:07.857829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:40.830 [2024-11-20 18:05:07.857841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.015 ms 00:30:40.830 [2024-11-20 18:05:07.857851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.830 [2024-11-20 18:05:07.860241] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:40.830 [2024-11-20 18:05:07.880165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.830 [2024-11-20 18:05:07.880217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:40.831 [2024-11-20 18:05:07.880240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.957 ms 00:30:40.831 [2024-11-20 18:05:07.880251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.831 [2024-11-20 18:05:07.880319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.831 [2024-11-20 18:05:07.880332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:40.831 [2024-11-20 18:05:07.880344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:30:40.831 [2024-11-20 18:05:07.880355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.831 [2024-11-20 18:05:07.892328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.831 [2024-11-20 
18:05:07.892357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:40.831 [2024-11-20 18:05:07.892371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.912 ms 00:30:40.831 [2024-11-20 18:05:07.892381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.831 [2024-11-20 18:05:07.892451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.831 [2024-11-20 18:05:07.892465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:40.831 [2024-11-20 18:05:07.892477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:30:40.831 [2024-11-20 18:05:07.892487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.831 [2024-11-20 18:05:07.892548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.831 [2024-11-20 18:05:07.892560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:40.831 [2024-11-20 18:05:07.892577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:40.831 [2024-11-20 18:05:07.892587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.831 [2024-11-20 18:05:07.892616] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:40.831 [2024-11-20 18:05:07.898289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.831 [2024-11-20 18:05:07.898323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:40.831 [2024-11-20 18:05:07.898337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.689 ms 00:30:40.831 [2024-11-20 18:05:07.898355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.831 [2024-11-20 18:05:07.898386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.831 [2024-11-20 18:05:07.898398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:40.831 [2024-11-20 18:05:07.898410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:40.831 [2024-11-20 18:05:07.898421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.831 [2024-11-20 18:05:07.898462] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:40.831 [2024-11-20 18:05:07.898488] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:40.831 [2024-11-20 18:05:07.898532] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:40.831 [2024-11-20 18:05:07.898552] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:40.831 [2024-11-20 18:05:07.898650] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:40.831 [2024-11-20 18:05:07.898665] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:40.831 [2024-11-20 18:05:07.898680] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:40.831 [2024-11-20 18:05:07.898705] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:40.831 [2024-11-20 18:05:07.898718] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:40.831 [2024-11-20 18:05:07.898734] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:40.831 [2024-11-20 18:05:07.898744] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:40.831 [2024-11-20 18:05:07.898754] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:40.831 [2024-11-20 18:05:07.898765] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:40.831 [2024-11-20 18:05:07.898793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.831 [2024-11-20 18:05:07.898804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:40.831 [2024-11-20 18:05:07.898816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.334 ms 00:30:40.831 [2024-11-20 18:05:07.898827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.831 [2024-11-20 18:05:07.898900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.831 [2024-11-20 18:05:07.898912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:40.831 [2024-11-20 18:05:07.898923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:40.831 [2024-11-20 18:05:07.898938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.831 [2024-11-20 18:05:07.899031] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:40.831 [2024-11-20 18:05:07.899044] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:40.831 [2024-11-20 18:05:07.899056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:40.831 [2024-11-20 18:05:07.899066] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.831 [2024-11-20 18:05:07.899078] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:40.831 [2024-11-20 18:05:07.899087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:40.831 [2024-11-20 18:05:07.899097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:40.831 [2024-11-20 18:05:07.899108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:40.831 [2024-11-20 18:05:07.899118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:40.831 [2024-11-20 18:05:07.899127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.831 [2024-11-20 18:05:07.899137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:40.831 [2024-11-20 18:05:07.899147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:40.831 [2024-11-20 18:05:07.899157] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.831 [2024-11-20 18:05:07.899167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:40.831 [2024-11-20 18:05:07.899176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:40.831 [2024-11-20 18:05:07.899187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.831 [2024-11-20 18:05:07.899196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:40.831 [2024-11-20 18:05:07.899205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:40.831 [2024-11-20 18:05:07.899214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.831 [2024-11-20 18:05:07.899224] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:40.831 [2024-11-20 18:05:07.899234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:40.831 [2024-11-20 18:05:07.899243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:40.831 [2024-11-20 18:05:07.899252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:40.831 [2024-11-20 18:05:07.899262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:40.831 [2024-11-20 18:05:07.899272] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:40.831 [2024-11-20 18:05:07.899294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:40.831 [2024-11-20 18:05:07.899304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:40.831 [2024-11-20 18:05:07.899313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:40.831 [2024-11-20 18:05:07.899323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:40.831 [2024-11-20 18:05:07.899333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:40.831 [2024-11-20 18:05:07.899342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:40.831 [2024-11-20 18:05:07.899352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:40.831 [2024-11-20 18:05:07.899362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:40.831 [2024-11-20 18:05:07.899372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.831 [2024-11-20 18:05:07.899381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:40.831 [2024-11-20 18:05:07.899390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:40.831 [2024-11-20 18:05:07.899398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.831 [2024-11-20 18:05:07.899407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:40.831 [2024-11-20 18:05:07.899417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:40.831 [2024-11-20 18:05:07.899426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.831 [2024-11-20 18:05:07.899435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:40.831 [2024-11-20 18:05:07.899443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:40.831 [2024-11-20 18:05:07.899452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.831 [2024-11-20 18:05:07.899462] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:40.831 [2024-11-20 18:05:07.899473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:40.831 [2024-11-20 18:05:07.899483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:40.831 [2024-11-20 18:05:07.899492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:40.831 [2024-11-20 18:05:07.899507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:40.831 [2024-11-20 18:05:07.899517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:40.831 [2024-11-20 18:05:07.899526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:40.831 [2024-11-20 18:05:07.899535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:40.831 [2024-11-20 18:05:07.899545] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:40.831 [2024-11-20 18:05:07.899555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:40.831 [2024-11-20 18:05:07.899566] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:40.831 [2024-11-20 18:05:07.899579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:40.832 [2024-11-20 18:05:07.899592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:40.832 [2024-11-20 18:05:07.899603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:40.832 [2024-11-20 18:05:07.899613] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:40.832 [2024-11-20 18:05:07.899624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:40.832 [2024-11-20 18:05:07.899635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:40.832 [2024-11-20 18:05:07.899645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:40.832 [2024-11-20 18:05:07.899655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:40.832 [2024-11-20 18:05:07.899667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:40.832 [2024-11-20 18:05:07.899677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:40.832 [2024-11-20 18:05:07.899687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:40.832 [2024-11-20 18:05:07.899697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:40.832 [2024-11-20 18:05:07.899708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:40.832 [2024-11-20 18:05:07.899717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:40.832 [2024-11-20 18:05:07.899728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:40.832 [2024-11-20 18:05:07.899738] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:40.832 [2024-11-20 18:05:07.899751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:40.832 [2024-11-20 18:05:07.899762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:40.832 [2024-11-20 18:05:07.899786] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:40.832 [2024-11-20 18:05:07.899797] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:40.832 [2024-11-20 18:05:07.899807] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:40.832 [2024-11-20 18:05:07.899822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:40.832 [2024-11-20 18:05:07.899834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:40.832 [2024-11-20 18:05:07.899845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.845 ms 00:30:40.832 [2024-11-20 18:05:07.899855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:40.832 [2024-11-20 18:05:07.899904] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:40.832 [2024-11-20 18:05:07.899918] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:45.023 [2024-11-20 18:05:11.729946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:11.730041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:45.023 [2024-11-20 18:05:11.730062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3836.258 ms 00:30:45.023 [2024-11-20 18:05:11.730074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:11.771945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:11.772004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:45.023 [2024-11-20 18:05:11.772022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.613 ms 00:30:45.023 [2024-11-20 18:05:11.772033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:11.772130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:11.772151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:45.023 [2024-11-20 18:05:11.772163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:45.023 [2024-11-20 18:05:11.772173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:11.823687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:11.823737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:45.023 [2024-11-20 18:05:11.823753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 51.526 ms 00:30:45.023 [2024-11-20 18:05:11.823778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:11.823823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:11.823834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:45.023 [2024-11-20 18:05:11.823845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:45.023 [2024-11-20 18:05:11.823855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:11.824650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:11.824672] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:45.023 [2024-11-20 18:05:11.824685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.736 ms 00:30:45.023 [2024-11-20 18:05:11.824695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:11.824742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:11.824752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:45.023 [2024-11-20 18:05:11.824763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:45.023 [2024-11-20 18:05:11.824784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:11.849590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:11.849631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:45.023 [2024-11-20 18:05:11.849645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.822 ms 00:30:45.023 [2024-11-20 18:05:11.849656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:11.899644] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:45.023 [2024-11-20 18:05:11.899690] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:45.023 [2024-11-20 18:05:11.899709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:11.899720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:45.023 [2024-11-20 18:05:11.899733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.991 ms 00:30:45.023 [2024-11-20 18:05:11.899744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:11.919414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:11.919452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:45.023 [2024-11-20 18:05:11.919467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.637 ms 00:30:45.023 [2024-11-20 18:05:11.919478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:11.936391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:11.936442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:45.023 [2024-11-20 18:05:11.936456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.890 ms 00:30:45.023 [2024-11-20 18:05:11.936466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:11.952648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:11.952881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:45.023 [2024-11-20 18:05:11.952905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.164 ms 00:30:45.023 [2024-11-20 18:05:11.952916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:11.953700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:11.953732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:45.023 [2024-11-20 
18:05:11.953745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.650 ms 00:30:45.023 [2024-11-20 18:05:11.953756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:12.045757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:12.045838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:45.023 [2024-11-20 18:05:12.045855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 92.109 ms 00:30:45.023 [2024-11-20 18:05:12.045866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:12.055969] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:45.023 [2024-11-20 18:05:12.057124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:12.057154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:45.023 [2024-11-20 18:05:12.057169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.225 ms 00:30:45.023 [2024-11-20 18:05:12.057180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:12.057266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:12.057285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:45.023 [2024-11-20 18:05:12.057297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:45.023 [2024-11-20 18:05:12.057308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:12.057398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:12.057427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:45.023 [2024-11-20 18:05:12.057440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:30:45.023 [2024-11-20 18:05:12.057451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:12.057477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.023 [2024-11-20 18:05:12.057489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:45.023 [2024-11-20 18:05:12.057505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:45.023 [2024-11-20 18:05:12.057517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.023 [2024-11-20 18:05:12.057559] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:45.024 [2024-11-20 18:05:12.057574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.024 [2024-11-20 18:05:12.057586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:45.024 [2024-11-20 18:05:12.057596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:30:45.024 [2024-11-20 18:05:12.057617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.024 [2024-11-20 18:05:12.091283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.024 [2024-11-20 18:05:12.091330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:45.024 [2024-11-20 18:05:12.091344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.697 ms 00:30:45.024 [2024-11-20 18:05:12.091355] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.024 [2024-11-20 18:05:12.091437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.024 [2024-11-20 18:05:12.091451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:45.024 [2024-11-20 18:05:12.091463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:30:45.024 [2024-11-20 18:05:12.091473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.024 [2024-11-20 18:05:12.093022] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4242.763 ms, result 0 00:30:45.024 [2024-11-20 18:05:12.107643] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.024 [2024-11-20 18:05:12.123610] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:45.024 [2024-11-20 18:05:12.132309] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:45.284 18:05:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.284 18:05:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:45.284 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:45.284 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:45.284 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:45.570 [2024-11-20 18:05:12.587907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.570 [2024-11-20 18:05:12.588122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:45.570 [2024-11-20 18:05:12.588255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:30:45.570 [2024-11-20 18:05:12.588305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.570 [2024-11-20 18:05:12.588361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.570 [2024-11-20 18:05:12.588397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:45.570 [2024-11-20 18:05:12.588429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:45.570 [2024-11-20 18:05:12.588524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.570 [2024-11-20 18:05:12.588559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:45.570 [2024-11-20 18:05:12.588571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:45.570 [2024-11-20 18:05:12.588583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:45.570 [2024-11-20 18:05:12.588594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:45.570 [2024-11-20 18:05:12.588660] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.740 ms, result 0 00:30:45.570 true 00:30:45.570 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:45.829 { 00:30:45.829 "name": "ftl", 00:30:45.829 "properties": [ 00:30:45.829 { 00:30:45.829 "name": "superblock_version", 00:30:45.829 "value": 5, 00:30:45.829 "read-only": true 00:30:45.829 }, 
00:30:45.829 { 00:30:45.829 "name": "base_device", 00:30:45.829 "bands": [ 00:30:45.829 { 00:30:45.829 "id": 0, 00:30:45.829 "state": "CLOSED", 00:30:45.829 "validity": 1.0 00:30:45.829 }, 00:30:45.829 { 00:30:45.829 "id": 1, 00:30:45.829 "state": "CLOSED", 00:30:45.829 "validity": 1.0 00:30:45.829 }, 00:30:45.829 { 00:30:45.829 "id": 2, 00:30:45.829 "state": "CLOSED", 00:30:45.829 "validity": 0.007843137254901933 00:30:45.829 }, 00:30:45.829 { 00:30:45.829 "id": 3, 00:30:45.829 "state": "FREE", 00:30:45.829 "validity": 0.0 00:30:45.829 }, 00:30:45.829 { 00:30:45.829 "id": 4, 00:30:45.829 "state": "FREE", 00:30:45.829 "validity": 0.0 00:30:45.829 }, 00:30:45.829 { 00:30:45.829 "id": 5, 00:30:45.829 "state": "FREE", 00:30:45.829 "validity": 0.0 00:30:45.829 }, 00:30:45.829 { 00:30:45.829 "id": 6, 00:30:45.829 "state": "FREE", 00:30:45.829 "validity": 0.0 00:30:45.829 }, 00:30:45.829 { 00:30:45.829 "id": 7, 00:30:45.829 "state": "FREE", 00:30:45.829 "validity": 0.0 00:30:45.829 }, 00:30:45.829 { 00:30:45.829 "id": 8, 00:30:45.829 "state": "FREE", 00:30:45.829 "validity": 0.0 00:30:45.829 }, 00:30:45.829 { 00:30:45.829 "id": 9, 00:30:45.829 "state": "FREE", 00:30:45.829 "validity": 0.0 00:30:45.829 }, 00:30:45.829 { 00:30:45.829 "id": 10, 00:30:45.829 "state": "FREE", 00:30:45.829 "validity": 0.0 00:30:45.829 }, 00:30:45.829 { 00:30:45.829 "id": 11, 00:30:45.829 "state": "FREE", 00:30:45.829 "validity": 0.0 00:30:45.829 }, 00:30:45.829 { 00:30:45.829 "id": 12, 00:30:45.829 "state": "FREE", 00:30:45.829 "validity": 0.0 00:30:45.829 }, 00:30:45.829 { 00:30:45.829 "id": 13, 00:30:45.829 "state": "FREE", 00:30:45.829 "validity": 0.0 00:30:45.829 }, 00:30:45.829 { 00:30:45.830 "id": 14, 00:30:45.830 "state": "FREE", 00:30:45.830 "validity": 0.0 00:30:45.830 }, 00:30:45.830 { 00:30:45.830 "id": 15, 00:30:45.830 "state": "FREE", 00:30:45.830 "validity": 0.0 00:30:45.830 }, 00:30:45.830 { 00:30:45.830 "id": 16, 00:30:45.830 "state": "FREE", 00:30:45.830 "validity": 0.0 00:30:45.830 }, 00:30:45.830 { 00:30:45.830 "id": 17, 00:30:45.830 "state": "FREE", 00:30:45.830 "validity": 0.0 00:30:45.830 } 00:30:45.830 ], 00:30:45.830 "read-only": true 00:30:45.830 }, 00:30:45.830 { 00:30:45.830 "name": "cache_device", 00:30:45.830 "type": "bdev", 00:30:45.830 "chunks": [ 00:30:45.830 { 00:30:45.830 "id": 0, 00:30:45.830 "state": "INACTIVE", 00:30:45.830 "utilization": 0.0 00:30:45.830 }, 00:30:45.830 { 00:30:45.830 "id": 1, 00:30:45.830 "state": "OPEN", 00:30:45.830 "utilization": 0.0 00:30:45.830 }, 00:30:45.830 { 00:30:45.830 "id": 2, 00:30:45.830 "state": "OPEN", 00:30:45.830 "utilization": 0.0 00:30:45.830 }, 00:30:45.830 { 00:30:45.830 "id": 3, 00:30:45.830 "state": "FREE", 00:30:45.830 "utilization": 0.0 00:30:45.830 }, 00:30:45.830 { 00:30:45.830 "id": 4, 00:30:45.830 "state": "FREE", 00:30:45.830 "utilization": 0.0 00:30:45.830 } 00:30:45.830 ], 00:30:45.830 "read-only": true 00:30:45.830 }, 00:30:45.830 { 00:30:45.830 "name": "verbose_mode", 00:30:45.830 "value": true, 00:30:45.830 "unit": "", 00:30:45.830 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:45.830 }, 00:30:45.830 { 00:30:45.830 "name": "prep_upgrade_on_shutdown", 00:30:45.830 "value": false, 00:30:45.830 "unit": "", 00:30:45.830 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:45.830 } 00:30:45.830 ] 00:30:45.830 } 00:30:45.830 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:45.830 18:05:12 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:45.830 18:05:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:46.089 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:46.089 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:46.089 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:46.089 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:46.089 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:46.348 Validate MD5 checksum, iteration 1 00:30:46.348 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:46.348 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:46.348 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:46.348 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:46.348 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:46.348 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:46.348 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:46.348 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:46.348 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:46.348 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:46.348 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:46.348 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:46.348 18:05:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:46.348 [2024-11-20 18:05:13.353924] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:30:46.348 [2024-11-20 18:05:13.354364] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84236 ] 00:30:46.606 [2024-11-20 18:05:13.534891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.606 [2024-11-20 18:05:13.648362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:48.509  [2024-11-20T18:05:16.252Z] Copying: 621/1024 [MB] (621 MBps) [2024-11-20T18:05:17.629Z] Copying: 1024/1024 [MB] (average 611 MBps) 00:30:50.453 00:30:50.712 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:50.712 18:05:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:52.616 18:05:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:52.616 18:05:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=873f85e2d2b36ae96bd124f6339a7e17 00:30:52.616 18:05:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 873f85e2d2b36ae96bd124f6339a7e17 != \8\7\3\f\8\5\e\2\d\2\b\3\6\a\e\9\6\b\d\1\2\4\f\6\3\3\9\a\7\e\1\7 ]] 00:30:52.616 18:05:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:52.616 Validate MD5 checksum, iteration 2 00:30:52.616 18:05:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:52.616 18:05:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:52.616 18:05:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:52.616 18:05:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:52.616 18:05:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:52.616 18:05:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:52.616 18:05:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:52.616 18:05:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:52.616 [2024-11-20 18:05:19.495165] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 
00:30:52.616 [2024-11-20 18:05:19.495628] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84303 ] 00:30:52.616 [2024-11-20 18:05:19.677669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.616 [2024-11-20 18:05:19.787462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:54.521  [2024-11-20T18:05:22.266Z] Copying: 618/1024 [MB] (618 MBps) [2024-11-20T18:05:23.643Z] Copying: 1024/1024 [MB] (average 608 MBps) 00:30:56.467 00:30:56.467 18:05:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:56.467 18:05:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e8bd9282b0351224aeaeb5046ad56db4 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e8bd9282b0351224aeaeb5046ad56db4 != \e\8\b\d\9\2\8\2\b\0\3\5\1\2\2\4\a\e\a\e\b\5\0\4\6\a\d\5\6\d\b\4 ]] 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84146 ]] 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84146 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84364 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84364 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84364 ']' 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:58.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
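At this point the test switches from clean operation to crash recovery: tcp_target_shutdown_dirty (common.sh@137-@139) SIGKILLs the running target, pid 84146, so FTL never gets to persist a clean-shutdown marker, and tcp_target_setup immediately respawns spdk_tgt from the saved tgt.json. A sketch of that pair, pieced together from the xtrace (the rootdir variable and the simplified setup body are assumptions):

    tcp_target_shutdown_dirty() {
        # kill -9 leaves FTL dirty on purpose; the recovery path is the test
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        "$rootdir/build/bin/spdk_tgt" "--cpumask=[0]" \
            --config="$rootdir/test/ftl/config/tgt.json" &
        spdk_tgt_pid=$!
        waitforlisten "$spdk_tgt_pid"
    }

Everything that follows, the P2L checkpoint restore, the open-chunk and open-band recovery, and the 'FTL startup' management process, is the new target (pid 84364) rebuilding state after the dirty shutdown.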
00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:58.371 18:05:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:58.371 [2024-11-20 18:05:25.315185] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:30:58.371 [2024-11-20 18:05:25.315333] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84364 ] 00:30:58.371 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84146 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:58.371 [2024-11-20 18:05:25.497913] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:58.630 [2024-11-20 18:05:25.626765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.569 [2024-11-20 18:05:26.678078] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:59.569 [2024-11-20 18:05:26.678159] bdev.c:8485:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:59.831 [2024-11-20 18:05:26.826166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.831 [2024-11-20 18:05:26.826212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:59.831 [2024-11-20 18:05:26.826229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:59.831 [2024-11-20 18:05:26.826240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-11-20 18:05:26.826303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.831 [2024-11-20 18:05:26.826317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:59.831 [2024-11-20 18:05:26.826329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.043 ms 00:30:59.831 [2024-11-20 18:05:26.826338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-11-20 18:05:26.826361] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:59.831 [2024-11-20 18:05:26.827247] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:59.831 [2024-11-20 18:05:26.827272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.831 [2024-11-20 18:05:26.827283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:59.831 [2024-11-20 18:05:26.827294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.917 ms 00:30:59.831 [2024-11-20 18:05:26.827304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-11-20 18:05:26.827630] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:59.831 [2024-11-20 18:05:26.851502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.831 [2024-11-20 18:05:26.851541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:59.831 [2024-11-20 18:05:26.851557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.912 ms 00:30:59.831 [2024-11-20 18:05:26.851568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-11-20 18:05:26.865238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:59.831 [2024-11-20 18:05:26.865274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:59.831 [2024-11-20 18:05:26.865291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:30:59.831 [2024-11-20 18:05:26.865301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-11-20 18:05:26.865820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.831 [2024-11-20 18:05:26.865837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:59.831 [2024-11-20 18:05:26.865848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.413 ms 00:30:59.831 [2024-11-20 18:05:26.865859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-11-20 18:05:26.865925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.831 [2024-11-20 18:05:26.865939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:59.831 [2024-11-20 18:05:26.865950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:30:59.831 [2024-11-20 18:05:26.865959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-11-20 18:05:26.865987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.831 [2024-11-20 18:05:26.865999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:59.831 [2024-11-20 18:05:26.866010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:59.831 [2024-11-20 18:05:26.866020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-11-20 18:05:26.866044] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:59.831 [2024-11-20 18:05:26.869794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.831 [2024-11-20 18:05:26.869825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:59.831 [2024-11-20 18:05:26.869837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.761 ms 00:30:59.831 [2024-11-20 18:05:26.869847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-11-20 18:05:26.869878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.831 [2024-11-20 18:05:26.869888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:59.831 [2024-11-20 18:05:26.869899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:59.831 [2024-11-20 18:05:26.869908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.831 [2024-11-20 18:05:26.869947] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:59.831 [2024-11-20 18:05:26.869972] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:59.832 [2024-11-20 18:05:26.870007] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:59.832 [2024-11-20 18:05:26.870029] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:59.832 [2024-11-20 18:05:26.870114] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:59.832 [2024-11-20 18:05:26.870128] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:59.832 [2024-11-20 18:05:26.870141] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:59.832 [2024-11-20 18:05:26.870154] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:59.832 [2024-11-20 18:05:26.870166] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:59.832 [2024-11-20 18:05:26.870177] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:59.832 [2024-11-20 18:05:26.870187] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:59.832 [2024-11-20 18:05:26.870198] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:59.832 [2024-11-20 18:05:26.870207] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:59.832 [2024-11-20 18:05:26.870217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.832 [2024-11-20 18:05:26.870229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:59.832 [2024-11-20 18:05:26.870240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.272 ms 00:30:59.832 [2024-11-20 18:05:26.870249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.832 [2024-11-20 18:05:26.870315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.832 [2024-11-20 18:05:26.870326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:59.832 [2024-11-20 18:05:26.870337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.050 ms 00:30:59.832 [2024-11-20 18:05:26.870347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.832 [2024-11-20 18:05:26.870429] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:59.832 [2024-11-20 18:05:26.870440] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:59.832 [2024-11-20 18:05:26.870455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:59.832 [2024-11-20 18:05:26.870465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:59.832 [2024-11-20 18:05:26.870475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:59.832 [2024-11-20 18:05:26.870484] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:59.832 [2024-11-20 18:05:26.870494] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:59.832 [2024-11-20 18:05:26.870504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:59.832 [2024-11-20 18:05:26.870513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:59.832 [2024-11-20 18:05:26.870522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:59.832 [2024-11-20 18:05:26.870531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:59.832 [2024-11-20 18:05:26.870540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:59.832 [2024-11-20 18:05:26.870549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:59.832 [2024-11-20 18:05:26.870559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:59.832 [2024-11-20 18:05:26.870568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:59.832 [2024-11-20 18:05:26.870577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:59.832 [2024-11-20 18:05:26.870586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:59.832 [2024-11-20 18:05:26.870594] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:59.832 [2024-11-20 18:05:26.870603] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:59.832 [2024-11-20 18:05:26.870612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:59.832 [2024-11-20 18:05:26.870621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:59.832 [2024-11-20 18:05:26.870630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:59.832 [2024-11-20 18:05:26.870638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:59.832 [2024-11-20 18:05:26.870659] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:59.832 [2024-11-20 18:05:26.870668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:59.832 [2024-11-20 18:05:26.870676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:59.832 [2024-11-20 18:05:26.870685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:59.832 [2024-11-20 18:05:26.870693] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:59.832 [2024-11-20 18:05:26.870702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:59.832 [2024-11-20 18:05:26.870711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:59.832 [2024-11-20 18:05:26.870719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:59.832 [2024-11-20 18:05:26.870728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:59.832 [2024-11-20 18:05:26.870738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:59.832 [2024-11-20 18:05:26.870746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:59.832 [2024-11-20 18:05:26.870754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:59.832 [2024-11-20 18:05:26.870763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:59.832 [2024-11-20 18:05:26.870786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:59.832 [2024-11-20 18:05:26.870795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:59.832 [2024-11-20 18:05:26.870804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:59.832 [2024-11-20 18:05:26.870814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:59.832 [2024-11-20 18:05:26.870824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:59.832 [2024-11-20 18:05:26.870833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:59.832 [2024-11-20 18:05:26.870842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:59.832 [2024-11-20 18:05:26.870850] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:59.832 [2024-11-20 18:05:26.870861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:59.832 [2024-11-20 18:05:26.870870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:59.832 [2024-11-20 18:05:26.870881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:59.832 [2024-11-20 18:05:26.870891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:59.832 [2024-11-20 18:05:26.870901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:59.832 [2024-11-20 18:05:26.870910] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:59.832 [2024-11-20 18:05:26.870919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:59.832 [2024-11-20 18:05:26.870928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:59.832 [2024-11-20 18:05:26.870938] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:59.832 [2024-11-20 18:05:26.870949] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:59.832 [2024-11-20 18:05:26.870961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:59.832 [2024-11-20 18:05:26.870973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:59.832 [2024-11-20 18:05:26.870983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:59.832 [2024-11-20 18:05:26.870993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:59.832 [2024-11-20 18:05:26.871003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:59.832 [2024-11-20 18:05:26.871013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:59.832 [2024-11-20 18:05:26.871023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:59.832 [2024-11-20 18:05:26.871033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:59.832 [2024-11-20 18:05:26.871043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:59.832 [2024-11-20 18:05:26.871053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:59.832 [2024-11-20 18:05:26.871063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:59.832 [2024-11-20 18:05:26.871073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:59.832 [2024-11-20 18:05:26.871083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:59.833 [2024-11-20 18:05:26.871093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:59.833 [2024-11-20 18:05:26.871103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:59.833 [2024-11-20 18:05:26.871113] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:59.833 [2024-11-20 18:05:26.871124] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:59.833 [2024-11-20 18:05:26.871140] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:59.833 [2024-11-20 18:05:26.871151] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:59.833 [2024-11-20 18:05:26.871162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:59.833 [2024-11-20 18:05:26.871173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:59.833 [2024-11-20 18:05:26.871184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.833 [2024-11-20 18:05:26.871193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:59.833 [2024-11-20 18:05:26.871203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.806 ms 00:30:59.833 [2024-11-20 18:05:26.871214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.833 [2024-11-20 18:05:26.912861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.833 [2024-11-20 18:05:26.912895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:59.833 [2024-11-20 18:05:26.912909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.662 ms 00:30:59.833 [2024-11-20 18:05:26.912919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.833 [2024-11-20 18:05:26.912959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.833 [2024-11-20 18:05:26.912971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:59.833 [2024-11-20 18:05:26.912981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:30:59.833 [2024-11-20 18:05:26.912991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.833 [2024-11-20 18:05:26.962738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.833 [2024-11-20 18:05:26.962792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:59.833 [2024-11-20 18:05:26.962807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.771 ms 00:30:59.833 [2024-11-20 18:05:26.962819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.833 [2024-11-20 18:05:26.962858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.833 [2024-11-20 18:05:26.962869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:59.833 [2024-11-20 18:05:26.962881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:59.833 [2024-11-20 18:05:26.962892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.833 [2024-11-20 18:05:26.963037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.833 [2024-11-20 18:05:26.963051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:59.833 [2024-11-20 18:05:26.963064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.055 ms 00:30:59.833 [2024-11-20 18:05:26.963075] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:59.833 [2024-11-20 18:05:26.963122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.833 [2024-11-20 18:05:26.963133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:59.833 [2024-11-20 18:05:26.963144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:30:59.833 [2024-11-20 18:05:26.963155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.833 [2024-11-20 18:05:26.987646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.833 [2024-11-20 18:05:26.987682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:59.833 [2024-11-20 18:05:26.987697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.503 ms 00:30:59.833 [2024-11-20 18:05:26.987713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:59.833 [2024-11-20 18:05:26.987867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:59.833 [2024-11-20 18:05:26.987888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:59.833 [2024-11-20 18:05:26.987901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:30:59.833 [2024-11-20 18:05:26.987911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.093 [2024-11-20 18:05:27.018371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.093 [2024-11-20 18:05:27.018410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:31:00.093 [2024-11-20 18:05:27.018425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.486 ms 00:31:00.093 [2024-11-20 18:05:27.018447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.093 [2024-11-20 18:05:27.032982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.093 [2024-11-20 18:05:27.033016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:31:00.093 [2024-11-20 18:05:27.033039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.592 ms 00:31:00.093 [2024-11-20 18:05:27.033049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.093 [2024-11-20 18:05:27.124109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.093 [2024-11-20 18:05:27.124179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:31:00.093 [2024-11-20 18:05:27.124203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 91.145 ms 00:31:00.093 [2024-11-20 18:05:27.124215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.093 [2024-11-20 18:05:27.124430] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:31:00.093 [2024-11-20 18:05:27.124609] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:31:00.093 [2024-11-20 18:05:27.124816] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:31:00.093 [2024-11-20 18:05:27.124991] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:31:00.093 [2024-11-20 18:05:27.125005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.093 [2024-11-20 18:05:27.125017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:31:00.093 [2024-11-20 
18:05:27.125029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.734 ms 00:31:00.093 [2024-11-20 18:05:27.125041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.093 [2024-11-20 18:05:27.125143] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:31:00.093 [2024-11-20 18:05:27.125159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.093 [2024-11-20 18:05:27.125175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:31:00.093 [2024-11-20 18:05:27.125187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:31:00.093 [2024-11-20 18:05:27.125198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.093 [2024-11-20 18:05:27.147816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.093 [2024-11-20 18:05:27.147862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:31:00.093 [2024-11-20 18:05:27.147877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.629 ms 00:31:00.093 [2024-11-20 18:05:27.147887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.093 [2024-11-20 18:05:27.161351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.093 [2024-11-20 18:05:27.161395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:31:00.093 [2024-11-20 18:05:27.161408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:31:00.093 [2024-11-20 18:05:27.161420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.093 [2024-11-20 18:05:27.161525] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:31:00.093 [2024-11-20 18:05:27.161880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.093 [2024-11-20 18:05:27.161894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:00.093 [2024-11-20 18:05:27.161908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.357 ms 00:31:00.093 [2024-11-20 18:05:27.161918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.661 [2024-11-20 18:05:27.775625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.661 [2024-11-20 18:05:27.775704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:00.661 [2024-11-20 18:05:27.775726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 613.542 ms 00:31:00.661 [2024-11-20 18:05:27.775740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.661 [2024-11-20 18:05:27.781801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.661 [2024-11-20 18:05:27.781848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:00.661 [2024-11-20 18:05:27.781871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.501 ms 00:31:00.661 [2024-11-20 18:05:27.781884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.661 [2024-11-20 18:05:27.782397] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:31:00.661 [2024-11-20 18:05:27.782433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.661 [2024-11-20 18:05:27.782445] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:00.661 [2024-11-20 18:05:27.782457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.503 ms 00:31:00.661 [2024-11-20 18:05:27.782469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.661 [2024-11-20 18:05:27.782503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.661 [2024-11-20 18:05:27.782516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:00.661 [2024-11-20 18:05:27.782528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:00.661 [2024-11-20 18:05:27.782539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:00.661 [2024-11-20 18:05:27.782582] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 622.069 ms, result 0 00:31:00.661 [2024-11-20 18:05:27.782630] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:31:00.661 [2024-11-20 18:05:27.782762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:00.661 [2024-11-20 18:05:27.782790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:31:00.661 [2024-11-20 18:05:27.782812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.132 ms 00:31:00.661 [2024-11-20 18:05:27.782824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.230 [2024-11-20 18:05:28.398661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.230 [2024-11-20 18:05:28.398755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:31:01.230 [2024-11-20 18:05:28.398791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 615.661 ms 00:31:01.230 [2024-11-20 18:05:28.398803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.489 [2024-11-20 18:05:28.405217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.489 [2024-11-20 18:05:28.405448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:31:01.489 [2024-11-20 18:05:28.405472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.643 ms 00:31:01.489 [2024-11-20 18:05:28.405485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.489 [2024-11-20 18:05:28.406042] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:31:01.489 [2024-11-20 18:05:28.406074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.489 [2024-11-20 18:05:28.406086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:31:01.489 [2024-11-20 18:05:28.406098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.546 ms 00:31:01.489 [2024-11-20 18:05:28.406109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.489 [2024-11-20 18:05:28.406142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.489 [2024-11-20 18:05:28.406154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:31:01.489 [2024-11-20 18:05:28.406166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:01.489 [2024-11-20 18:05:28.406176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.489 [2024-11-20 
18:05:28.406216] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 624.596 ms, result 0 00:31:01.489 [2024-11-20 18:05:28.406268] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:31:01.489 [2024-11-20 18:05:28.406281] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:31:01.489 [2024-11-20 18:05:28.406295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.489 [2024-11-20 18:05:28.406307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:31:01.489 [2024-11-20 18:05:28.406319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1246.816 ms 00:31:01.489 [2024-11-20 18:05:28.406330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.489 [2024-11-20 18:05:28.406362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.489 [2024-11-20 18:05:28.406374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:31:01.489 [2024-11-20 18:05:28.406392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:31:01.489 [2024-11-20 18:05:28.406402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.489 [2024-11-20 18:05:28.418733] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:31:01.489 [2024-11-20 18:05:28.418889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.489 [2024-11-20 18:05:28.418904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:31:01.489 [2024-11-20 18:05:28.418917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.489 ms 00:31:01.489 [2024-11-20 18:05:28.418928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.489 [2024-11-20 18:05:28.419510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.489 [2024-11-20 18:05:28.419542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:31:01.489 [2024-11-20 18:05:28.419559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.508 ms 00:31:01.489 [2024-11-20 18:05:28.419570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.489 [2024-11-20 18:05:28.421602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.489 [2024-11-20 18:05:28.421630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:31:01.489 [2024-11-20 18:05:28.421643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.015 ms 00:31:01.489 [2024-11-20 18:05:28.421654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.489 [2024-11-20 18:05:28.421699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.489 [2024-11-20 18:05:28.421711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:31:01.489 [2024-11-20 18:05:28.421722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:31:01.489 [2024-11-20 18:05:28.421739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.489 [2024-11-20 18:05:28.421861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.489 [2024-11-20 18:05:28.421875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:31:01.489 
[2024-11-20 18:05:28.421886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:31:01.489 [2024-11-20 18:05:28.421897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.489 [2024-11-20 18:05:28.421921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.489 [2024-11-20 18:05:28.421932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:31:01.489 [2024-11-20 18:05:28.421944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:31:01.489 [2024-11-20 18:05:28.421954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.489 [2024-11-20 18:05:28.421996] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:31:01.489 [2024-11-20 18:05:28.422009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.489 [2024-11-20 18:05:28.422020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:31:01.489 [2024-11-20 18:05:28.422031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:31:01.489 [2024-11-20 18:05:28.422042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.489 [2024-11-20 18:05:28.422100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:01.489 [2024-11-20 18:05:28.422113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:31:01.489 [2024-11-20 18:05:28.422124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:31:01.489 [2024-11-20 18:05:28.422134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:01.489 [2024-11-20 18:05:28.423358] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1599.260 ms, result 0 00:31:01.490 [2024-11-20 18:05:28.435699] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.490 [2024-11-20 18:05:28.451672] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:31:01.490 [2024-11-20 18:05:28.462207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:01.490 18:05:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:01.490 18:05:28 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:31:01.490 18:05:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:31:01.490 18:05:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:31:01.490 18:05:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:31:01.490 18:05:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:31:01.490 18:05:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:31:01.490 18:05:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:01.490 Validate MD5 checksum, iteration 1 00:31:01.490 18:05:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:31:01.490 18:05:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:01.490 18:05:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:01.490 18:05:28 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:01.490 18:05:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:01.490 18:05:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:01.490 18:05:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:31:01.490 [2024-11-20 18:05:28.600306] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization... 00:31:01.490 [2024-11-20 18:05:28.600426] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84399 ] 00:31:01.748 [2024-11-20 18:05:28.784056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:01.748 [2024-11-20 18:05:28.895971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:03.703  [2024-11-20T18:05:31.446Z] Copying: 609/1024 [MB] (609 MBps) [2024-11-20T18:05:33.393Z] Copying: 1024/1024 [MB] (average 595 MBps) 00:31:06.217 00:31:06.217 18:05:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:31:06.217 18:05:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:08.121 18:05:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:08.121 Validate MD5 checksum, iteration 2 00:31:08.121 18:05:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=873f85e2d2b36ae96bd124f6339a7e17 00:31:08.121 18:05:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 873f85e2d2b36ae96bd124f6339a7e17 != \8\7\3\f\8\5\e\2\d\2\b\3\6\a\e\9\6\b\d\1\2\4\f\6\3\3\9\a\7\e\1\7 ]] 00:31:08.121 18:05:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:08.121 18:05:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:08.121 18:05:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:31:08.121 18:05:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:08.121 18:05:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:31:08.121 18:05:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:31:08.121 18:05:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:31:08.121 18:05:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:31:08.121 18:05:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:31:08.121 [2024-11-20 18:05:34.928721] Starting SPDK v25.01-pre git sha1 
09ac735c8 / DPDK 24.03.0 initialization... 00:31:08.121 [2024-11-20 18:05:34.929243] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84489 ] 00:31:08.121 [2024-11-20 18:05:35.110570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:08.121 [2024-11-20 18:05:35.230068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:10.023  [2024-11-20T18:05:37.764Z] Copying: 605/1024 [MB] (605 MBps) [2024-11-20T18:05:39.137Z] Copying: 1024/1024 [MB] (average 612 MBps) 00:31:11.961 00:31:11.961 18:05:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:31:11.961 18:05:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e8bd9282b0351224aeaeb5046ad56db4 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e8bd9282b0351224aeaeb5046ad56db4 != \e\8\b\d\9\2\8\2\b\0\3\5\1\2\2\4\a\e\a\e\b\5\0\4\6\a\d\5\6\d\b\4 ]] 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84364 ]] 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84364 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84364 ']' 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84364 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84364 00:31:13.873 killing process with pid 84364 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84364' 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84364 00:31:13.873 18:05:40 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84364 00:31:15.250 [2024-11-20 18:05:42.121205] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:31:15.250 [2024-11-20 18:05:42.140287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.250 [2024-11-20 18:05:42.140331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:31:15.250 [2024-11-20 18:05:42.140348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:31:15.250 [2024-11-20 18:05:42.140359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.250 [2024-11-20 18:05:42.140385] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:31:15.250 [2024-11-20 18:05:42.144956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.250 [2024-11-20 18:05:42.144990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:31:15.250 [2024-11-20 18:05:42.145007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.562 ms 00:31:15.250 [2024-11-20 18:05:42.145018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.250 [2024-11-20 18:05:42.145229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.250 [2024-11-20 18:05:42.145243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:31:15.250 [2024-11-20 18:05:42.145254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.185 ms 00:31:15.250 [2024-11-20 18:05:42.145264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.250 [2024-11-20 18:05:42.146540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.250 [2024-11-20 18:05:42.146580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:31:15.250 [2024-11-20 18:05:42.146593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.260 ms 00:31:15.250 [2024-11-20 18:05:42.146604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.250 [2024-11-20 18:05:42.147562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.250 [2024-11-20 18:05:42.147591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:31:15.250 [2024-11-20 18:05:42.147603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.917 ms 00:31:15.250 [2024-11-20 18:05:42.147614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.250 [2024-11-20 18:05:42.162835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.250 [2024-11-20 18:05:42.162873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:31:15.250 [2024-11-20 18:05:42.162888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.185 ms 00:31:15.250 [2024-11-20 18:05:42.162906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.250 [2024-11-20 18:05:42.171164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.250 [2024-11-20 18:05:42.171199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:31:15.250 [2024-11-20 18:05:42.171213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.231 ms 00:31:15.250 [2024-11-20 18:05:42.171224] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:31:15.250 [2024-11-20 18:05:42.171328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.250 [2024-11-20 18:05:42.171342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:31:15.250 [2024-11-20 18:05:42.171353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:31:15.250 [2024-11-20 18:05:42.171364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.250 [2024-11-20 18:05:42.185827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.250 [2024-11-20 18:05:42.185860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:31:15.250 [2024-11-20 18:05:42.185874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.462 ms 00:31:15.250 [2024-11-20 18:05:42.185884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.250 [2024-11-20 18:05:42.200387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.250 [2024-11-20 18:05:42.200418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:31:15.250 [2024-11-20 18:05:42.200431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.490 ms 00:31:15.250 [2024-11-20 18:05:42.200442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.250 [2024-11-20 18:05:42.214744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.250 [2024-11-20 18:05:42.214787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:31:15.250 [2024-11-20 18:05:42.214801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.290 ms 00:31:15.250 [2024-11-20 18:05:42.214812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.250 [2024-11-20 18:05:42.229458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.250 [2024-11-20 18:05:42.229492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:31:15.250 [2024-11-20 18:05:42.229505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.594 ms 00:31:15.250 [2024-11-20 18:05:42.229515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.250 [2024-11-20 18:05:42.229551] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:31:15.250 [2024-11-20 18:05:42.229569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:31:15.250 [2024-11-20 18:05:42.229582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:31:15.250 [2024-11-20 18:05:42.229594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:31:15.250 [2024-11-20 18:05:42.229605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:15.250 [2024-11-20 18:05:42.229618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:15.250 [2024-11-20 18:05:42.229629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:15.250 [2024-11-20 18:05:42.229640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:15.250 [2024-11-20 18:05:42.229651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:15.250 
[2024-11-20 18:05:42.229663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:15.250 [2024-11-20 18:05:42.229674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:15.250 [2024-11-20 18:05:42.229685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:15.250 [2024-11-20 18:05:42.229696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:15.250 [2024-11-20 18:05:42.229706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:15.250 [2024-11-20 18:05:42.229717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:15.250 [2024-11-20 18:05:42.229728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:15.250 [2024-11-20 18:05:42.229738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:15.250 [2024-11-20 18:05:42.229749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:15.250 [2024-11-20 18:05:42.229760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:15.250 [2024-11-20 18:05:42.229785] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:31:15.251 [2024-11-20 18:05:42.229797] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 2709fbf5-d0b9-4ebc-bb91-2e95d7678b7f 00:31:15.251 [2024-11-20 18:05:42.229808] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:31:15.251 [2024-11-20 18:05:42.229819] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:31:15.251 [2024-11-20 18:05:42.229829] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:31:15.251 [2024-11-20 18:05:42.229841] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:31:15.251 [2024-11-20 18:05:42.229851] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:31:15.251 [2024-11-20 18:05:42.229863] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:31:15.251 [2024-11-20 18:05:42.229874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:31:15.251 [2024-11-20 18:05:42.229884] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:31:15.251 [2024-11-20 18:05:42.229894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:31:15.251 [2024-11-20 18:05:42.229911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.251 [2024-11-20 18:05:42.229928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:31:15.251 [2024-11-20 18:05:42.229939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.362 ms 00:31:15.251 [2024-11-20 18:05:42.229951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:31:15.251 [2024-11-20 18:05:42.250084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:31:15.251 [2024-11-20 18:05:42.250118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:31:15.251 [2024-11-20 18:05:42.250131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.134 ms 00:31:15.251 [2024-11-20 18:05:42.250142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
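This orderly teardown (persist metadata, dump band validity and statistics, then the 'FTL shutdown' management process) is exactly what the earlier kill -9 skipped; it runs now because cleanup stops the target with killprocess instead. The killprocess xtrace above (autotest_common.sh@954-@978) follows roughly this shape; the sudo branch is not exercised in this run, so its body is omitted here:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0       # already gone, nothing to do
        local process_name
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name != sudo ]]; then
            echo "killing process with pid $pid"
            kill "$pid"    # plain SIGTERM, so FTL can shut down cleanly
        fi
        wait "$pid"
    }

Note the WAF line in the statistics dump: with 320 total writes and 0 user writes the write amplification factor is 320/0, which ftl_debug.c prints as inf.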
00:31:15.251 [2024-11-20 18:05:42.250686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:31:15.251 [2024-11-20 18:05:42.250699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing
00:31:15.251 [2024-11-20 18:05:42.250710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.522 ms
00:31:15.251 [2024-11-20 18:05:42.250721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:15.251 [2024-11-20 18:05:42.319236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:15.251 [2024-11-20 18:05:42.319272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc
00:31:15.251 [2024-11-20 18:05:42.319286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:15.251 [2024-11-20 18:05:42.319296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:15.251 [2024-11-20 18:05:42.319337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:15.251 [2024-11-20 18:05:42.319349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata
00:31:15.251 [2024-11-20 18:05:42.319360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:15.251 [2024-11-20 18:05:42.319370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:15.251 [2024-11-20 18:05:42.319448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:15.251 [2024-11-20 18:05:42.319462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map
00:31:15.251 [2024-11-20 18:05:42.319473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:15.251 [2024-11-20 18:05:42.319484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:15.251 [2024-11-20 18:05:42.319502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:15.251 [2024-11-20 18:05:42.319534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map
00:31:15.251 [2024-11-20 18:05:42.319545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:15.251 [2024-11-20 18:05:42.319556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:15.510 [2024-11-20 18:05:42.449268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:15.510 [2024-11-20 18:05:42.449327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache
00:31:15.510 [2024-11-20 18:05:42.449343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:15.510 [2024-11-20 18:05:42.449362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:15.510 [2024-11-20 18:05:42.552334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:15.510 [2024-11-20 18:05:42.552393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata
00:31:15.510 [2024-11-20 18:05:42.552409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:15.510 [2024-11-20 18:05:42.552421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:15.510 [2024-11-20 18:05:42.552555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:15.510 [2024-11-20 18:05:42.552568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel
00:31:15.510 [2024-11-20 18:05:42.552580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:15.510 [2024-11-20 18:05:42.552591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:15.510 [2024-11-20 18:05:42.552644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:15.510 [2024-11-20 18:05:42.552657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands
00:31:15.510 [2024-11-20 18:05:42.552674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:15.510 [2024-11-20 18:05:42.552697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:15.510 [2024-11-20 18:05:42.552863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:15.510 [2024-11-20 18:05:42.552880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools
00:31:15.510 [2024-11-20 18:05:42.552891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:15.510 [2024-11-20 18:05:42.552903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:15.510 [2024-11-20 18:05:42.552946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:15.510 [2024-11-20 18:05:42.552959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock
00:31:15.510 [2024-11-20 18:05:42.552970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:15.510 [2024-11-20 18:05:42.552986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:15.510 [2024-11-20 18:05:42.553033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:15.510 [2024-11-20 18:05:42.553045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev
00:31:15.510 [2024-11-20 18:05:42.553055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:15.510 [2024-11-20 18:05:42.553067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:15.510 [2024-11-20 18:05:42.553118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
00:31:15.510 [2024-11-20 18:05:42.553130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
00:31:15.510 [2024-11-20 18:05:42.553145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
00:31:15.511 [2024-11-20 18:05:42.553156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:31:15.511 [2024-11-20 18:05:42.553298] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 413.638 ms, result 0
00:31:16.888 18:05:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
00:31:16.888 18:05:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:31:16.888 18:05:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup
00:31:16.888 18:05:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown
00:31:16.888 18:05:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]]
00:31:16.888 18:05:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:31:16.888 Remove shared memory files
00:31:16.888 18:05:43 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm
00:31:16.888 18:05:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:31:16.888 18:05:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:31:16.888 18:05:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:31:16.888 18:05:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84146
00:31:16.888 18:05:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:31:16.888 18:05:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:31:16.888 ************************************
00:31:16.888 END TEST ftl_upgrade_shutdown
00:31:16.888 ************************************
00:31:16.888
00:31:16.888 real 1m28.932s
00:31:16.888 user 1m59.896s
00:31:16.888 sys 0m25.140s
00:31:16.888 18:05:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:16.888 18:05:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
00:31:16.888 Process with pid 76727 is not found 18:05:44 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]]
00:31:16.888 18:05:44 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit
00:31:16.888 18:05:44 ftl -- ftl/ftl.sh@14 -- # killprocess 76727
00:31:16.888 18:05:44 ftl -- common/autotest_common.sh@954 -- # '[' -z 76727 ']'
00:31:16.888 18:05:44 ftl -- common/autotest_common.sh@958 -- # kill -0 76727
00:31:16.888 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76727) - No such process
00:31:16.888 18:05:44 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76727 is not found'
00:31:16.888 18:05:44 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]]
00:31:16.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:31:16.888 18:05:44 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84621
00:31:16.888 18:05:44 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84621
00:31:16.888 18:05:44 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:31:16.888 18:05:44 ftl -- common/autotest_common.sh@835 -- # '[' -z 84621 ']'
00:31:16.888 18:05:44 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:31:16.888 18:05:44 ftl -- common/autotest_common.sh@840 -- # local max_retries=100
00:31:16.888 18:05:44 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:31:16.888 18:05:44 ftl -- common/autotest_common.sh@844 -- # xtrace_disable
00:31:16.888 18:05:44 ftl -- common/autotest_common.sh@10 -- # set +x
00:31:17.147 [2024-11-20 18:05:44.130379] Starting SPDK v25.01-pre git sha1 09ac735c8 / DPDK 24.03.0 initialization...
00:31:17.147 [2024-11-20 18:05:44.130504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84621 ]
00:31:17.147 [2024-11-20 18:05:44.310967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:17.405 [2024-11-20 18:05:44.439753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:31:18.340 18:05:45 ftl -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:31:18.340 18:05:45 ftl -- common/autotest_common.sh@868 -- # return 0
00:31:18.340 18:05:45 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:31:18.598 nvme0n1
00:31:18.599 18:05:45 ftl -- ftl/ftl.sh@22 -- # clear_lvols
00:31:18.599 18:05:45 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:31:18.599 18:05:45 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:31:18.857 18:05:45 ftl -- ftl/common.sh@28 -- # stores=5c35f0c6-9a77-4d87-aafa-1f917d6b2195
00:31:18.857 18:05:45 ftl -- ftl/common.sh@29 -- # for lvs in $stores
00:31:18.857 18:05:45 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5c35f0c6-9a77-4d87-aafa-1f917d6b2195
00:31:19.138 18:05:46 ftl -- ftl/ftl.sh@23 -- # killprocess 84621
00:31:19.138 18:05:46 ftl -- common/autotest_common.sh@954 -- # '[' -z 84621 ']'
00:31:19.138 18:05:46 ftl -- common/autotest_common.sh@958 -- # kill -0 84621
00:31:19.138 18:05:46 ftl -- common/autotest_common.sh@959 -- # uname
00:31:19.138 18:05:46 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:19.138 18:05:46 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84621
00:31:19.138 killing process with pid 84621
00:31:19.138 18:05:46 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:19.138 18:05:46 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:19.138 18:05:46 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84621'
00:31:19.138 18:05:46 ftl -- common/autotest_common.sh@973 -- # kill 84621
00:31:19.138 18:05:46 ftl -- common/autotest_common.sh@978 -- # wait 84621
00:31:21.673 18:05:48 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:31:21.931 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:22.190 Waiting for block devices as requested
00:31:22.190 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:31:22.190 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:31:22.449 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:31:22.449 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:31:27.721 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:31:27.722 18:05:54 ftl -- ftl/ftl.sh@28 -- # remove_shm
00:31:27.722 18:05:54 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files
00:31:27.722 Remove shared memory files
00:31:27.722 18:05:54 ftl -- ftl/common.sh@205 -- # rm -f rm -f
00:31:27.722 18:05:54 ftl -- ftl/common.sh@206 -- # rm -f rm -f
00:31:27.722 18:05:54 ftl -- ftl/common.sh@207 -- # rm -f rm -f
00:31:27.722 18:05:54 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:31:27.722 18:05:54 ftl -- ftl/common.sh@209 -- # rm -f rm -f
00:31:27.722 ************************************
00:31:27.722 END TEST ftl ************************************
00:31:27.722
00:31:27.722 real 11m36.956s
00:31:27.722 user 14m2.625s
00:31:27.722 sys 1m34.627s
00:31:27.722 18:05:54 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:31:27.722 18:05:54 ftl -- common/autotest_common.sh@10 -- # set +x
00:31:27.722 18:05:54 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:31:27.722 18:05:54 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:31:27.722 18:05:54 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:31:27.722 18:05:54 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:31:27.722 18:05:54 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:31:27.722 18:05:54 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:31:27.722 18:05:54 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:31:27.722 18:05:54 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:31:27.722 18:05:54 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:31:27.722 18:05:54 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:31:27.722 18:05:54 -- common/autotest_common.sh@726 -- # xtrace_disable
00:31:27.722 18:05:54 -- common/autotest_common.sh@10 -- # set +x
00:31:27.722 18:05:54 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:31:27.722 18:05:54 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:31:27.722 18:05:54 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:31:27.722 18:05:54 -- common/autotest_common.sh@10 -- # set +x
00:31:30.259 INFO: APP EXITING
00:31:30.259 INFO: killing all VMs
00:31:30.259 INFO: killing vhost app
00:31:30.259 INFO: EXIT DONE
00:31:30.259 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:30.827 0000:00:11.0 (1b36 0010): Already using the nvme driver
00:31:30.827 0000:00:10.0 (1b36 0010): Already using the nvme driver
00:31:30.827 0000:00:12.0 (1b36 0010): Already using the nvme driver
00:31:30.827 0000:00:13.0 (1b36 0010): Already using the nvme driver
00:31:31.397 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:31:31.965 Cleaning
00:31:31.965 Removing: /var/run/dpdk/spdk0/config
00:31:31.965 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:31:31.965 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:31:31.965 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:31:31.965 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:31:31.965 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:31:31.965 Removing: /var/run/dpdk/spdk0/hugepage_info
00:31:31.965 Removing: /var/run/dpdk/spdk0
00:31:31.965 Removing: /var/run/dpdk/spdk_pid57527
00:31:31.965 Removing: /var/run/dpdk/spdk_pid57770
00:31:31.965 Removing: /var/run/dpdk/spdk_pid57999
00:31:31.965 Removing: /var/run/dpdk/spdk_pid58103
00:31:31.965 Removing: /var/run/dpdk/spdk_pid58159
00:31:31.965 Removing: /var/run/dpdk/spdk_pid58298
00:31:31.965 Removing: /var/run/dpdk/spdk_pid58316
00:31:31.965 Removing: /var/run/dpdk/spdk_pid58526
00:31:31.965 Removing: /var/run/dpdk/spdk_pid58638
00:31:31.965 Removing: /var/run/dpdk/spdk_pid58745
00:31:31.965 Removing: /var/run/dpdk/spdk_pid58871
00:31:31.965 Removing: /var/run/dpdk/spdk_pid58979
00:31:31.965 Removing: /var/run/dpdk/spdk_pid59020
00:31:31.965 Removing: /var/run/dpdk/spdk_pid59056
00:31:31.965 Removing: /var/run/dpdk/spdk_pid59127
00:31:31.965 Removing: /var/run/dpdk/spdk_pid59255
00:31:31.965 Removing: /var/run/dpdk/spdk_pid59714
00:31:31.965 Removing: /var/run/dpdk/spdk_pid59790
00:31:31.965 Removing: /var/run/dpdk/spdk_pid59864
00:31:31.965 Removing: /var/run/dpdk/spdk_pid59889
00:31:31.965 Removing: /var/run/dpdk/spdk_pid60039
00:31:31.965 Removing: /var/run/dpdk/spdk_pid60066
00:31:31.965 Removing: /var/run/dpdk/spdk_pid60214
00:31:31.965 Removing: /var/run/dpdk/spdk_pid60236
00:31:31.965 Removing: /var/run/dpdk/spdk_pid60305
00:31:31.965 Removing: /var/run/dpdk/spdk_pid60323
00:31:31.965 Removing: /var/run/dpdk/spdk_pid60393
00:31:31.965 Removing: /var/run/dpdk/spdk_pid60411
00:31:31.965 Removing: /var/run/dpdk/spdk_pid60606
00:31:31.965 Removing: /var/run/dpdk/spdk_pid60648
00:31:31.965 Removing: /var/run/dpdk/spdk_pid60737
00:31:31.965 Removing: /var/run/dpdk/spdk_pid60926
00:31:31.965 Removing: /var/run/dpdk/spdk_pid61026
00:31:31.965 Removing: /var/run/dpdk/spdk_pid61069
00:31:31.965 Removing: /var/run/dpdk/spdk_pid61521
00:31:31.965 Removing: /var/run/dpdk/spdk_pid61630
00:31:31.965 Removing: /var/run/dpdk/spdk_pid61739
00:31:31.965 Removing: /var/run/dpdk/spdk_pid61798
00:31:31.965 Removing: /var/run/dpdk/spdk_pid61824
00:31:31.966 Removing: /var/run/dpdk/spdk_pid61909
00:31:32.223 Removing: /var/run/dpdk/spdk_pid62558
00:31:32.223 Removing: /var/run/dpdk/spdk_pid62600
00:31:32.223 Removing: /var/run/dpdk/spdk_pid63090
00:31:32.223 Removing: /var/run/dpdk/spdk_pid63194
00:31:32.223 Removing: /var/run/dpdk/spdk_pid63314
00:31:32.223 Removing: /var/run/dpdk/spdk_pid63367
00:31:32.223 Removing: /var/run/dpdk/spdk_pid63393
00:31:32.223 Removing: /var/run/dpdk/spdk_pid63418
00:31:32.223 Removing: /var/run/dpdk/spdk_pid65319
00:31:32.223 Removing: /var/run/dpdk/spdk_pid65473
00:31:32.223 Removing: /var/run/dpdk/spdk_pid65477
00:31:32.223 Removing: /var/run/dpdk/spdk_pid65489
00:31:32.223 Removing: /var/run/dpdk/spdk_pid65541
00:31:32.223 Removing: /var/run/dpdk/spdk_pid65545
00:31:32.223 Removing: /var/run/dpdk/spdk_pid65557
00:31:32.223 Removing: /var/run/dpdk/spdk_pid65607
00:31:32.223 Removing: /var/run/dpdk/spdk_pid65611
00:31:32.223 Removing: /var/run/dpdk/spdk_pid65623
00:31:32.223 Removing: /var/run/dpdk/spdk_pid65668
00:31:32.223 Removing: /var/run/dpdk/spdk_pid65672
00:31:32.223 Removing: /var/run/dpdk/spdk_pid65684
00:31:32.223 Removing: /var/run/dpdk/spdk_pid67105
00:31:32.223 Removing: /var/run/dpdk/spdk_pid67219
00:31:32.223 Removing: /var/run/dpdk/spdk_pid68656
00:31:32.223 Removing: /var/run/dpdk/spdk_pid70402
00:31:32.223 Removing: /var/run/dpdk/spdk_pid70482
00:31:32.223 Removing: /var/run/dpdk/spdk_pid70558
00:31:32.223 Removing: /var/run/dpdk/spdk_pid70674
00:31:32.224 Removing: /var/run/dpdk/spdk_pid70767
00:31:32.224 Removing: /var/run/dpdk/spdk_pid70875
00:31:32.224 Removing: /var/run/dpdk/spdk_pid70959
00:31:32.224 Removing: /var/run/dpdk/spdk_pid71035
00:31:32.224 Removing: /var/run/dpdk/spdk_pid71145
00:31:32.224 Removing: /var/run/dpdk/spdk_pid71242
00:31:32.224 Removing: /var/run/dpdk/spdk_pid71338
00:31:32.224 Removing: /var/run/dpdk/spdk_pid71423
00:31:32.224 Removing: /var/run/dpdk/spdk_pid71504
00:31:32.224 Removing: /var/run/dpdk/spdk_pid71615
00:31:32.224 Removing: /var/run/dpdk/spdk_pid71712
00:31:32.224 Removing: /var/run/dpdk/spdk_pid71812
00:31:32.224 Removing: /var/run/dpdk/spdk_pid71894
00:31:32.224 Removing: /var/run/dpdk/spdk_pid71975
00:31:32.224 Removing: /var/run/dpdk/spdk_pid72085
00:31:32.224 Removing: /var/run/dpdk/spdk_pid72182
00:31:32.224 Removing: /var/run/dpdk/spdk_pid72282
00:31:32.224 Removing: /var/run/dpdk/spdk_pid72363
00:31:32.224 Removing: /var/run/dpdk/spdk_pid72443
00:31:32.224 Removing: /var/run/dpdk/spdk_pid72519
00:31:32.224 Removing: /var/run/dpdk/spdk_pid72604
00:31:32.224 Removing: /var/run/dpdk/spdk_pid72719
00:31:32.224 Removing: /var/run/dpdk/spdk_pid72812
00:31:32.224 Removing: /var/run/dpdk/spdk_pid72912
00:31:32.224 Removing: /var/run/dpdk/spdk_pid72993
00:31:32.224 Removing: /var/run/dpdk/spdk_pid73075
00:31:32.224 Removing: /var/run/dpdk/spdk_pid73149
00:31:32.224 Removing: /var/run/dpdk/spdk_pid73229
00:31:32.224 Removing: /var/run/dpdk/spdk_pid73338
00:31:32.483 Removing: /var/run/dpdk/spdk_pid73434
00:31:32.483 Removing: /var/run/dpdk/spdk_pid73584
00:31:32.483 Removing: /var/run/dpdk/spdk_pid73879
00:31:32.483 Removing: /var/run/dpdk/spdk_pid73925
00:31:32.483 Removing: /var/run/dpdk/spdk_pid74387
00:31:32.483 Removing: /var/run/dpdk/spdk_pid74573
00:31:32.483 Removing: /var/run/dpdk/spdk_pid74682
00:31:32.483 Removing: /var/run/dpdk/spdk_pid74796
00:31:32.483 Removing: /var/run/dpdk/spdk_pid74855
00:31:32.483 Removing: /var/run/dpdk/spdk_pid74886
00:31:32.483 Removing: /var/run/dpdk/spdk_pid75178
00:31:32.483 Removing: /var/run/dpdk/spdk_pid75248
00:31:32.483 Removing: /var/run/dpdk/spdk_pid75337
00:31:32.483 Removing: /var/run/dpdk/spdk_pid75764
00:31:32.483 Removing: /var/run/dpdk/spdk_pid75910
00:31:32.483 Removing: /var/run/dpdk/spdk_pid76727
00:31:32.483 Removing: /var/run/dpdk/spdk_pid76870
00:31:32.483 Removing: /var/run/dpdk/spdk_pid77073
00:31:32.483 Removing: /var/run/dpdk/spdk_pid77181
00:31:32.483 Removing: /var/run/dpdk/spdk_pid77518
00:31:32.483 Removing: /var/run/dpdk/spdk_pid77773
00:31:32.483 Removing: /var/run/dpdk/spdk_pid78141
00:31:32.483 Removing: /var/run/dpdk/spdk_pid78355
00:31:32.483 Removing: /var/run/dpdk/spdk_pid78506
00:31:32.483 Removing: /var/run/dpdk/spdk_pid78577
00:31:32.483 Removing: /var/run/dpdk/spdk_pid78727
00:31:32.483 Removing: /var/run/dpdk/spdk_pid78763
00:31:32.483 Removing: /var/run/dpdk/spdk_pid78829
00:31:32.483 Removing: /var/run/dpdk/spdk_pid79038
00:31:32.483 Removing: /var/run/dpdk/spdk_pid79284
00:31:32.483 Removing: /var/run/dpdk/spdk_pid79742
00:31:32.483 Removing: /var/run/dpdk/spdk_pid80200
00:31:32.483 Removing: /var/run/dpdk/spdk_pid80657
00:31:32.483 Removing: /var/run/dpdk/spdk_pid81180
00:31:32.483 Removing: /var/run/dpdk/spdk_pid81344
00:31:32.483 Removing: /var/run/dpdk/spdk_pid81437
00:31:32.483 Removing: /var/run/dpdk/spdk_pid82077
00:31:32.483 Removing: /var/run/dpdk/spdk_pid82151
00:31:32.483 Removing: /var/run/dpdk/spdk_pid82655
00:31:32.483 Removing: /var/run/dpdk/spdk_pid83046
00:31:32.483 Removing: /var/run/dpdk/spdk_pid83572
00:31:32.483 Removing: /var/run/dpdk/spdk_pid83700
00:31:32.483 Removing: /var/run/dpdk/spdk_pid83764
00:31:32.483 Removing: /var/run/dpdk/spdk_pid83828
00:31:32.483 Removing: /var/run/dpdk/spdk_pid83885
00:31:32.483 Removing: /var/run/dpdk/spdk_pid83949
00:31:32.483 Removing: /var/run/dpdk/spdk_pid84146
00:31:32.483 Removing: /var/run/dpdk/spdk_pid84236
00:31:32.483 Removing: /var/run/dpdk/spdk_pid84303
00:31:32.483 Removing: /var/run/dpdk/spdk_pid84364
00:31:32.483 Removing: /var/run/dpdk/spdk_pid84399
00:31:32.483 Removing: /var/run/dpdk/spdk_pid84489
00:31:32.483 Removing: /var/run/dpdk/spdk_pid84621
00:31:32.483 Clean
00:31:32.742 18:05:59 -- common/autotest_common.sh@1453 -- # return 0
00:31:32.742 18:05:59 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:31:32.742 18:05:59 -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:32.742 18:05:59 -- common/autotest_common.sh@10 -- # set +x
00:31:32.742 18:05:59 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:31:32.742 18:05:59 -- common/autotest_common.sh@732 -- # xtrace_disable
00:31:32.742 18:05:59 -- common/autotest_common.sh@10 -- # set +x
00:31:32.742 18:05:59 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:31:32.742 18:05:59 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:31:32.742 18:05:59 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:31:32.742 18:05:59 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:31:32.742 18:05:59 -- spdk/autotest.sh@398 -- # hostname
00:31:32.742 18:05:59 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:31:33.001 geninfo: WARNING: invalid characters removed from testname!
00:31:59.626 18:06:25 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:02.163 18:06:28 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:04.067 18:06:31 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:06.602 18:06:33 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:08.507 18:06:35 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:10.415 18:06:37 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:32:12.948 18:06:39 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:32:12.948 18:06:39 -- spdk/autorun.sh@1 -- $ timing_finish
00:32:12.948 18:06:39 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:32:12.948 18:06:39 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:32:12.948 18:06:39 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:32:12.948 18:06:39 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:32:12.948 + [[ -n 5255 ]]
00:32:12.957 + sudo kill 5255
00:32:12.966 [Pipeline] }
00:32:12.981 [Pipeline] // timeout
00:32:12.986 [Pipeline] }
00:32:13.000 [Pipeline] // stage
00:32:13.005 [Pipeline] }
00:32:13.019 [Pipeline] // catchError
00:32:13.028 [Pipeline] stage
00:32:13.030 [Pipeline] { (Stop VM)
00:32:13.042 [Pipeline] sh
00:32:13.322 + vagrant halt
00:32:16.673 ==> default: Halting domain...
00:32:23.256 [Pipeline] sh
00:32:23.542 + vagrant destroy -f
00:32:26.082 ==> default: Removing domain...
00:32:26.665 [Pipeline] sh
00:32:26.952 + mv output /var/jenkins/workspace/nvme-vg-autotest/output
00:32:26.961 [Pipeline] }
00:32:26.979 [Pipeline] // stage
00:32:26.984 [Pipeline] }
00:32:26.998 [Pipeline] // dir
00:32:27.004 [Pipeline] }
00:32:27.018 [Pipeline] // wrap
00:32:27.023 [Pipeline] }
00:32:27.064 [Pipeline] // catchError
00:32:27.073 [Pipeline] stage
00:32:27.075 [Pipeline] { (Epilogue)
00:32:27.087 [Pipeline] sh
00:32:27.369 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:32:32.673 [Pipeline] catchError
00:32:32.675 [Pipeline] {
00:32:32.690 [Pipeline] sh
00:32:32.973 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:32:32.973 Artifacts sizes are good
00:32:32.982 [Pipeline] }
00:32:32.999 [Pipeline] // catchError
00:32:33.012 [Pipeline] archiveArtifacts
00:32:33.020 Archiving artifacts
00:32:33.136 [Pipeline] cleanWs
00:32:33.149 [WS-CLEANUP] Deleting project workspace...
00:32:33.149 [WS-CLEANUP] Deferred wipeout is used...
00:32:33.156 [WS-CLEANUP] done
00:32:33.158 [Pipeline] }
00:32:33.175 [Pipeline] // stage
00:32:33.181 [Pipeline] }
00:32:33.196 [Pipeline] // node
00:32:33.203 [Pipeline] End of Pipeline
00:32:33.249 Finished: SUCCESS