00:00:00.000 Started by upstream project "autotest-per-patch" build number 132397 00:00:00.000 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.083 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:03.308 The recommended git tool is: git 00:00:03.309 using credential 00000000-0000-0000-0000-000000000002 00:00:03.310 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:03.323 Fetching changes from the remote Git repository 00:00:03.325 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:03.336 Using shallow fetch with depth 1 00:00:03.336 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:03.336 > git --version # timeout=10 00:00:03.347 > git --version # 'git version 2.39.2' 00:00:03.347 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:03.359 Setting http proxy: proxy-dmz.intel.com:911 00:00:03.359 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.350 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.365 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:09.380 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:09.380 > git config core.sparsecheckout # timeout=10 00:00:09.392 > git read-tree -mu HEAD # timeout=10 00:00:09.411 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:09.432 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:09.432 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:09.519 [Pipeline] Start of Pipeline 00:00:09.532 [Pipeline] library 00:00:09.533 Loading library shm_lib@master 00:00:09.533 Library shm_lib@master is cached. Copying from home. 00:00:09.548 [Pipeline] node 00:00:24.549 Still waiting to schedule task 00:00:24.550 Waiting for next available executor on ‘vagrant-vm-host’ 00:16:38.484 Running on VM-host-WFP7 in /var/jenkins/workspace/nvme-vg-autotest 00:16:38.486 [Pipeline] { 00:16:38.502 [Pipeline] catchError 00:16:38.505 [Pipeline] { 00:16:38.519 [Pipeline] wrap 00:16:38.528 [Pipeline] { 00:16:38.536 [Pipeline] stage 00:16:38.543 [Pipeline] { (Prologue) 00:16:38.563 [Pipeline] echo 00:16:38.564 Node: VM-host-WFP7 00:16:38.571 [Pipeline] cleanWs 00:16:38.582 [WS-CLEANUP] Deleting project workspace... 00:16:38.582 [WS-CLEANUP] Deferred wipeout is used... 
00:16:38.589 [WS-CLEANUP] done 00:16:38.797 [Pipeline] setCustomBuildProperty 00:16:38.890 [Pipeline] httpRequest 00:16:39.206 [Pipeline] echo 00:16:39.208 Sorcerer 10.211.164.20 is alive 00:16:39.217 [Pipeline] retry 00:16:39.219 [Pipeline] { 00:16:39.236 [Pipeline] httpRequest 00:16:39.242 HttpMethod: GET 00:16:39.243 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:16:39.243 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:16:39.244 Response Code: HTTP/1.1 200 OK 00:16:39.244 Success: Status code 200 is in the accepted range: 200,404 00:16:39.245 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:16:39.391 [Pipeline] } 00:16:39.409 [Pipeline] // retry 00:16:39.416 [Pipeline] sh 00:16:39.696 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:16:39.712 [Pipeline] httpRequest 00:16:40.022 [Pipeline] echo 00:16:40.025 Sorcerer 10.211.164.20 is alive 00:16:40.035 [Pipeline] retry 00:16:40.038 [Pipeline] { 00:16:40.052 [Pipeline] httpRequest 00:16:40.056 HttpMethod: GET 00:16:40.057 URL: http://10.211.164.20/packages/spdk_d581148517718459f31027c0ff6fcdad8e686ee9.tar.gz 00:16:40.058 Sending request to url: http://10.211.164.20/packages/spdk_d581148517718459f31027c0ff6fcdad8e686ee9.tar.gz 00:16:40.060 Response Code: HTTP/1.1 200 OK 00:16:40.061 Success: Status code 200 is in the accepted range: 200,404 00:16:40.062 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_d581148517718459f31027c0ff6fcdad8e686ee9.tar.gz 00:16:42.333 [Pipeline] } 00:16:42.353 [Pipeline] // retry 00:16:42.361 [Pipeline] sh 00:16:42.644 + tar --no-same-owner -xf spdk_d581148517718459f31027c0ff6fcdad8e686ee9.tar.gz 00:16:45.947 [Pipeline] sh 00:16:46.282 + git -C spdk log --oneline -n5 00:16:46.282 d58114851 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io 00:16:46.282 32c3f377c bdev: Use data_block_size for upper layer buffer if hide_metadata is true 00:16:46.282 d3dfde872 bdev: Add APIs get metadata config via desc depending on hide_metadata option 00:16:46.282 b6a8866f3 bdev: Add spdk_bdev_open_ext_v2() to support per-open options 00:16:46.282 3bdf5e874 bdev: Locate all hot data in spdk_bdev_desc to the first cache line 00:16:46.304 [Pipeline] writeFile 00:16:46.319 [Pipeline] sh 00:16:46.604 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:16:46.616 [Pipeline] sh 00:16:46.914 + cat autorun-spdk.conf 00:16:46.914 SPDK_RUN_FUNCTIONAL_TEST=1 00:16:46.914 SPDK_TEST_NVME=1 00:16:46.914 SPDK_TEST_FTL=1 00:16:46.914 SPDK_TEST_ISAL=1 00:16:46.914 SPDK_RUN_ASAN=1 00:16:46.914 SPDK_RUN_UBSAN=1 00:16:46.914 SPDK_TEST_XNVME=1 00:16:46.914 SPDK_TEST_NVME_FDP=1 00:16:46.914 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:16:46.920 RUN_NIGHTLY=0 00:16:46.923 [Pipeline] } 00:16:46.936 [Pipeline] // stage 00:16:46.951 [Pipeline] stage 00:16:46.953 [Pipeline] { (Run VM) 00:16:46.966 [Pipeline] sh 00:16:47.247 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:16:47.247 + echo 'Start stage prepare_nvme.sh' 00:16:47.247 Start stage prepare_nvme.sh 00:16:47.247 + [[ -n 5 ]] 00:16:47.247 + disk_prefix=ex5 00:16:47.247 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:16:47.247 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:16:47.247 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:16:47.247 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:16:47.247 ++ 
SPDK_TEST_NVME=1 00:16:47.247 ++ SPDK_TEST_FTL=1 00:16:47.247 ++ SPDK_TEST_ISAL=1 00:16:47.247 ++ SPDK_RUN_ASAN=1 00:16:47.247 ++ SPDK_RUN_UBSAN=1 00:16:47.247 ++ SPDK_TEST_XNVME=1 00:16:47.247 ++ SPDK_TEST_NVME_FDP=1 00:16:47.247 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:16:47.247 ++ RUN_NIGHTLY=0 00:16:47.247 + cd /var/jenkins/workspace/nvme-vg-autotest 00:16:47.247 + nvme_files=() 00:16:47.247 + declare -A nvme_files 00:16:47.247 + backend_dir=/var/lib/libvirt/images/backends 00:16:47.247 + nvme_files['nvme.img']=5G 00:16:47.247 + nvme_files['nvme-cmb.img']=5G 00:16:47.247 + nvme_files['nvme-multi0.img']=4G 00:16:47.247 + nvme_files['nvme-multi1.img']=4G 00:16:47.247 + nvme_files['nvme-multi2.img']=4G 00:16:47.247 + nvme_files['nvme-openstack.img']=8G 00:16:47.247 + nvme_files['nvme-zns.img']=5G 00:16:47.247 + (( SPDK_TEST_NVME_PMR == 1 )) 00:16:47.247 + (( SPDK_TEST_FTL == 1 )) 00:16:47.247 + nvme_files["nvme-ftl.img"]=6G 00:16:47.247 + (( SPDK_TEST_NVME_FDP == 1 )) 00:16:47.247 + nvme_files["nvme-fdp.img"]=1G 00:16:47.247 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:16:47.247 + for nvme in "${!nvme_files[@]}" 00:16:47.247 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:16:47.247 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:16:47.247 + for nvme in "${!nvme_files[@]}" 00:16:47.247 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-ftl.img -s 6G 00:16:47.507 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:16:47.507 + for nvme in "${!nvme_files[@]}" 00:16:47.507 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:16:47.507 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:16:47.507 + for nvme in "${!nvme_files[@]}" 00:16:47.507 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:16:47.507 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:16:47.507 + for nvme in "${!nvme_files[@]}" 00:16:47.507 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:16:47.507 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:16:47.507 + for nvme in "${!nvme_files[@]}" 00:16:47.507 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:16:47.507 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:16:47.507 + for nvme in "${!nvme_files[@]}" 00:16:47.507 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:16:47.507 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:16:47.507 + for nvme in "${!nvme_files[@]}" 00:16:47.507 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-fdp.img -s 1G 00:16:47.766 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:16:47.766 + for nvme in "${!nvme_files[@]}" 00:16:47.766 + sudo -E 
spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:16:47.766 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:16:47.766 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:16:47.766 + echo 'End stage prepare_nvme.sh' 00:16:47.766 End stage prepare_nvme.sh 00:16:47.779 [Pipeline] sh 00:16:48.066 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:16:48.066 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex5-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex5-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:16:48.066 00:16:48.066 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:16:48.066 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:16:48.066 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:16:48.066 HELP=0 00:16:48.066 DRY_RUN=0 00:16:48.066 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,/var/lib/libvirt/images/backends/ex5-nvme-fdp.img, 00:16:48.066 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:16:48.066 NVME_AUTO_CREATE=0 00:16:48.066 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,, 00:16:48.066 NVME_CMB=,,,, 00:16:48.066 NVME_PMR=,,,, 00:16:48.066 NVME_ZNS=,,,, 00:16:48.066 NVME_MS=true,,,, 00:16:48.066 NVME_FDP=,,,on, 00:16:48.066 SPDK_VAGRANT_DISTRO=fedora39 00:16:48.066 SPDK_VAGRANT_VMCPU=10 00:16:48.066 SPDK_VAGRANT_VMRAM=12288 00:16:48.066 SPDK_VAGRANT_PROVIDER=libvirt 00:16:48.066 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:16:48.066 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:16:48.066 SPDK_OPENSTACK_NETWORK=0 00:16:48.066 VAGRANT_PACKAGE_BOX=0 00:16:48.066 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:16:48.066 FORCE_DISTRO=true 00:16:48.066 VAGRANT_BOX_VERSION= 00:16:48.066 EXTRA_VAGRANTFILES= 00:16:48.066 NIC_MODEL=virtio 00:16:48.066 00:16:48.066 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:16:48.066 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:16:50.603 Bringing machine 'default' up with 'libvirt' provider... 00:16:51.174 ==> default: Creating image (snapshot of base box volume). 00:16:51.174 ==> default: Creating domain with the following settings... 
00:16:51.174 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732109818_42f716cbdd9fd842dcc6 00:16:51.174 ==> default: -- Domain type: kvm 00:16:51.174 ==> default: -- Cpus: 10 00:16:51.174 ==> default: -- Feature: acpi 00:16:51.174 ==> default: -- Feature: apic 00:16:51.175 ==> default: -- Feature: pae 00:16:51.175 ==> default: -- Memory: 12288M 00:16:51.175 ==> default: -- Memory Backing: hugepages: 00:16:51.175 ==> default: -- Management MAC: 00:16:51.175 ==> default: -- Loader: 00:16:51.175 ==> default: -- Nvram: 00:16:51.175 ==> default: -- Base box: spdk/fedora39 00:16:51.175 ==> default: -- Storage pool: default 00:16:51.175 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732109818_42f716cbdd9fd842dcc6.img (20G) 00:16:51.175 ==> default: -- Volume Cache: default 00:16:51.175 ==> default: -- Kernel: 00:16:51.175 ==> default: -- Initrd: 00:16:51.175 ==> default: -- Graphics Type: vnc 00:16:51.175 ==> default: -- Graphics Port: -1 00:16:51.175 ==> default: -- Graphics IP: 127.0.0.1 00:16:51.175 ==> default: -- Graphics Password: Not defined 00:16:51.175 ==> default: -- Video Type: cirrus 00:16:51.175 ==> default: -- Video VRAM: 9216 00:16:51.175 ==> default: -- Sound Type: 00:16:51.175 ==> default: -- Keymap: en-us 00:16:51.175 ==> default: -- TPM Path: 00:16:51.175 ==> default: -- INPUT: type=mouse, bus=ps2 00:16:51.175 ==> default: -- Command line args: 00:16:51.175 ==> default: -> value=-device, 00:16:51.175 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:16:51.175 ==> default: -> value=-drive, 00:16:51.175 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:16:51.175 ==> default: -> value=-device, 00:16:51.175 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:16:51.175 ==> default: -> value=-device, 00:16:51.175 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:16:51.175 ==> default: -> value=-drive, 00:16:51.175 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-1-drive0, 00:16:51.175 ==> default: -> value=-device, 00:16:51.175 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:51.175 ==> default: -> value=-device, 00:16:51.175 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:16:51.175 ==> default: -> value=-drive, 00:16:51.175 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:16:51.175 ==> default: -> value=-device, 00:16:51.175 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:51.175 ==> default: -> value=-drive, 00:16:51.175 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:16:51.175 ==> default: -> value=-device, 00:16:51.175 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:51.175 ==> default: -> value=-drive, 00:16:51.175 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:16:51.175 ==> default: -> value=-device, 00:16:51.175 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:51.175 ==> default: -> value=-device, 00:16:51.175 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:16:51.175 ==> default: -> value=-device, 00:16:51.175 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:16:51.175 ==> default: -> value=-drive, 00:16:51.175 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:16:51.175 ==> default: -> value=-device, 00:16:51.175 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:16:51.435 ==> default: Creating shared folders metadata... 00:16:51.435 ==> default: Starting domain. 00:16:52.811 ==> default: Waiting for domain to get an IP address... 00:17:07.696 ==> default: Waiting for SSH to become available... 00:17:09.076 ==> default: Configuring and enabling network interfaces... 00:17:15.648 default: SSH address: 192.168.121.27:22 00:17:15.648 default: SSH username: vagrant 00:17:15.648 default: SSH auth method: private key 00:17:17.024 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:17:25.140 ==> default: Mounting SSHFS shared folder... 00:17:26.515 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:17:26.515 ==> default: Checking Mount.. 00:17:27.894 ==> default: Folder Successfully Mounted! 00:17:27.894 ==> default: Running provisioner: file... 00:17:28.830 default: ~/.gitconfig => .gitconfig 00:17:29.399 00:17:29.399 SUCCESS! 00:17:29.399 00:17:29.399 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:17:29.399 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:17:29.399 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:17:29.399 00:17:29.408 [Pipeline] } 00:17:29.423 [Pipeline] // stage 00:17:29.432 [Pipeline] dir 00:17:29.433 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:17:29.434 [Pipeline] { 00:17:29.447 [Pipeline] catchError 00:17:29.449 [Pipeline] { 00:17:29.462 [Pipeline] sh 00:17:29.744 + vagrant ssh-config --host vagrant 00:17:29.744 + sed -ne /^Host/,$p 00:17:29.744 + tee ssh_conf 00:17:33.031 Host vagrant 00:17:33.031 HostName 192.168.121.27 00:17:33.031 User vagrant 00:17:33.031 Port 22 00:17:33.031 UserKnownHostsFile /dev/null 00:17:33.031 StrictHostKeyChecking no 00:17:33.031 PasswordAuthentication no 00:17:33.031 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:17:33.031 IdentitiesOnly yes 00:17:33.031 LogLevel FATAL 00:17:33.031 ForwardAgent yes 00:17:33.031 ForwardX11 yes 00:17:33.031 00:17:33.044 [Pipeline] withEnv 00:17:33.046 [Pipeline] { 00:17:33.059 [Pipeline] sh 00:17:33.340 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:17:33.340 source /etc/os-release 00:17:33.340 [[ -e /image.version ]] && img=$(< /image.version) 00:17:33.340 # Minimal, systemd-like check. 
00:17:33.340 if [[ -e /.dockerenv ]]; then 00:17:33.340 # Clear garbage from the node's name: 00:17:33.340 # agt-er_autotest_547-896 -> autotest_547-896 00:17:33.340 # $HOSTNAME is the actual container id 00:17:33.340 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:17:33.340 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:17:33.340 # We can assume this is a mount from a host where container is running, 00:17:33.340 # so fetch its hostname to easily identify the target swarm worker. 00:17:33.340 container="$(< /etc/hostname) ($agent)" 00:17:33.340 else 00:17:33.340 # Fallback 00:17:33.340 container=$agent 00:17:33.340 fi 00:17:33.340 fi 00:17:33.340 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:17:33.340 00:17:33.609 [Pipeline] } 00:17:33.624 [Pipeline] // withEnv 00:17:33.631 [Pipeline] setCustomBuildProperty 00:17:33.647 [Pipeline] stage 00:17:33.649 [Pipeline] { (Tests) 00:17:33.665 [Pipeline] sh 00:17:33.944 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:17:34.219 [Pipeline] sh 00:17:34.496 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:17:34.772 [Pipeline] timeout 00:17:34.772 Timeout set to expire in 50 min 00:17:34.774 [Pipeline] { 00:17:34.790 [Pipeline] sh 00:17:35.073 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:17:35.641 HEAD is now at d58114851 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io 00:17:35.653 [Pipeline] sh 00:17:35.936 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:17:36.209 [Pipeline] sh 00:17:36.492 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:17:36.767 [Pipeline] sh 00:17:37.049 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo 00:17:37.307 ++ readlink -f spdk_repo 00:17:37.307 + DIR_ROOT=/home/vagrant/spdk_repo 00:17:37.307 + [[ -n /home/vagrant/spdk_repo ]] 00:17:37.307 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:17:37.307 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:17:37.307 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:17:37.307 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:17:37.307 + [[ -d /home/vagrant/spdk_repo/output ]] 00:17:37.307 + [[ nvme-vg-autotest == pkgdep-* ]] 00:17:37.308 + cd /home/vagrant/spdk_repo 00:17:37.308 + source /etc/os-release 00:17:37.308 ++ NAME='Fedora Linux' 00:17:37.308 ++ VERSION='39 (Cloud Edition)' 00:17:37.308 ++ ID=fedora 00:17:37.308 ++ VERSION_ID=39 00:17:37.308 ++ VERSION_CODENAME= 00:17:37.308 ++ PLATFORM_ID=platform:f39 00:17:37.308 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:17:37.308 ++ ANSI_COLOR='0;38;2;60;110;180' 00:17:37.308 ++ LOGO=fedora-logo-icon 00:17:37.308 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:17:37.308 ++ HOME_URL=https://fedoraproject.org/ 00:17:37.308 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:17:37.308 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:17:37.308 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:17:37.308 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:17:37.308 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:17:37.308 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:17:37.308 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:17:37.308 ++ SUPPORT_END=2024-11-12 00:17:37.308 ++ VARIANT='Cloud Edition' 00:17:37.308 ++ VARIANT_ID=cloud 00:17:37.308 + uname -a 00:17:37.308 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:17:37.308 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:17:37.566 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:38.135 Hugepages 00:17:38.135 node hugesize free / total 00:17:38.135 node0 1048576kB 0 / 0 00:17:38.135 node0 2048kB 0 / 0 00:17:38.135 00:17:38.135 Type BDF Vendor Device NUMA Driver Device Block devices 00:17:38.135 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:17:38.135 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:17:38.135 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:17:38.135 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:17:38.135 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:17:38.135 + rm -f /tmp/spdk-ld-path 00:17:38.135 + source autorun-spdk.conf 00:17:38.135 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:17:38.135 ++ SPDK_TEST_NVME=1 00:17:38.135 ++ SPDK_TEST_FTL=1 00:17:38.135 ++ SPDK_TEST_ISAL=1 00:17:38.135 ++ SPDK_RUN_ASAN=1 00:17:38.135 ++ SPDK_RUN_UBSAN=1 00:17:38.135 ++ SPDK_TEST_XNVME=1 00:17:38.135 ++ SPDK_TEST_NVME_FDP=1 00:17:38.135 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:17:38.135 ++ RUN_NIGHTLY=0 00:17:38.135 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:17:38.135 + [[ -n '' ]] 00:17:38.135 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:17:38.135 + for M in /var/spdk/build-*-manifest.txt 00:17:38.135 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:17:38.135 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:17:38.135 + for M in /var/spdk/build-*-manifest.txt 00:17:38.135 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:17:38.135 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:17:38.135 + for M in /var/spdk/build-*-manifest.txt 00:17:38.135 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:17:38.135 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:17:38.135 ++ uname 00:17:38.135 + [[ Linux == \L\i\n\u\x ]] 00:17:38.135 + sudo dmesg -T 00:17:38.135 + sudo dmesg --clear 00:17:38.394 + dmesg_pid=5466 00:17:38.394 
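The four controllers in that setup.sh listing are the QEMU devices defined at domain creation: 0000:00:10.0 (serial 12340) backs the FTL image with 64-byte metadata, 0000:00:11.0 (serial 12341) the plain 5G image, 0000:00:12.0 (serial 12342) the three multi-namespace images, and 0000:00:13.0 (serial 12343) the FDP-enabled subsystem. Note that the kernel's nvmeX names do not follow PCI order here (nvme1 landed on 0000:00:12.0, nvme2 on 0000:00:11.0). A quick way to re-check the mapping from inside the guest, matching serials against the -device arguments above (a sketch, not part of the run; it assumes nvme-cli is installed in the image):

    sudo nvme list -v                  # controllers with serials, namespaces, and PCI addresses
    cat /sys/class/nvme/nvme1/serial   # or read one controller's serial straight from sysfs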
+ [[ Fedora Linux == FreeBSD ]] 00:17:38.394 + sudo dmesg -Tw 00:17:38.394 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:38.394 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:17:38.394 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:17:38.394 + [[ -x /usr/src/fio-static/fio ]] 00:17:38.394 + export FIO_BIN=/usr/src/fio-static/fio 00:17:38.394 + FIO_BIN=/usr/src/fio-static/fio 00:17:38.394 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:17:38.394 + [[ ! -v VFIO_QEMU_BIN ]] 00:17:38.394 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:17:38.394 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:38.394 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:17:38.394 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:17:38.394 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:38.394 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:17:38.394 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:17:38.394 13:37:45 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:17:38.394 13:37:45 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:17:38.394 13:37:45 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:17:38.394 13:37:45 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:17:38.394 13:37:45 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:17:38.394 13:37:45 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:17:38.394 13:37:45 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:17:38.394 13:37:45 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:17:38.394 13:37:45 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:17:38.394 13:37:45 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:17:38.394 13:37:45 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:17:38.394 13:37:45 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:17:38.394 13:37:45 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:17:38.394 13:37:45 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:17:38.394 13:37:46 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:17:38.394 13:37:46 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:38.394 13:37:46 -- scripts/common.sh@15 -- $ shopt -s extglob 00:17:38.394 13:37:46 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:17:38.394 13:37:46 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.394 13:37:46 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.394 13:37:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.394 13:37:46 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.394 13:37:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.394 13:37:46 -- paths/export.sh@5 -- $ export PATH 00:17:38.394 13:37:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.394 13:37:46 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:17:38.394 13:37:46 -- common/autobuild_common.sh@493 -- $ date +%s 00:17:38.394 13:37:46 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732109866.XXXXXX 00:17:38.394 13:37:46 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732109866.wyMVgl 00:17:38.394 13:37:46 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:17:38.394 13:37:46 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:17:38.395 13:37:46 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:17:38.395 13:37:46 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:17:38.395 13:37:46 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:17:38.395 13:37:46 -- common/autobuild_common.sh@509 -- $ get_config_params 00:17:38.395 13:37:46 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:17:38.395 13:37:46 -- common/autotest_common.sh@10 -- $ set +x 00:17:38.395 13:37:46 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:17:38.395 13:37:46 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:17:38.395 13:37:46 -- pm/common@17 -- $ local monitor 00:17:38.395 13:37:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:38.395 13:37:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:17:38.395 13:37:46 -- pm/common@25 -- $ sleep 1 00:17:38.395 13:37:46 -- pm/common@21 -- $ date +%s 00:17:38.395 13:37:46 -- pm/common@21 -- $ date +%s 00:17:38.395 13:37:46 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732109866 00:17:38.395 13:37:46 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732109866 00:17:38.654 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732109866_collect-cpu-load.pm.log 00:17:38.654 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732109866_collect-vmstat.pm.log 00:17:39.591 13:37:47 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:17:39.591 13:37:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:17:39.591 13:37:47 -- spdk/autobuild.sh@12 -- $ umask 022 00:17:39.591 13:37:47 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:17:39.591 13:37:47 -- spdk/autobuild.sh@16 -- $ date -u 00:17:39.591 Wed Nov 20 01:37:47 PM UTC 2024 00:17:39.591 13:37:47 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:17:39.591 v25.01-pre-224-gd58114851 00:17:39.591 13:37:47 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:17:39.591 13:37:47 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:17:39.591 13:37:47 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:17:39.591 13:37:47 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:17:39.591 13:37:47 -- common/autotest_common.sh@10 -- $ set +x 00:17:39.591 ************************************ 00:17:39.591 START TEST asan 00:17:39.591 ************************************ 00:17:39.591 using asan 00:17:39.591 13:37:47 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:17:39.591 00:17:39.591 real 0m0.000s 00:17:39.591 user 0m0.000s 00:17:39.591 sys 0m0.000s 00:17:39.591 13:37:47 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:17:39.591 13:37:47 asan -- common/autotest_common.sh@10 -- $ set +x 00:17:39.591 ************************************ 00:17:39.591 END TEST asan 00:17:39.591 ************************************ 00:17:39.591 13:37:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:17:39.591 13:37:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:17:39.591 13:37:47 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:17:39.591 13:37:47 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:17:39.591 13:37:47 -- common/autotest_common.sh@10 -- $ set +x 00:17:39.591 ************************************ 00:17:39.591 START TEST ubsan 00:17:39.591 ************************************ 00:17:39.591 using ubsan 00:17:39.591 13:37:47 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:17:39.591 00:17:39.591 real 0m0.000s 00:17:39.591 user 0m0.000s 00:17:39.591 sys 0m0.000s 00:17:39.591 13:37:47 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:17:39.591 13:37:47 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:17:39.591 ************************************ 00:17:39.591 END TEST ubsan 00:17:39.591 ************************************ 00:17:39.591 13:37:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:17:39.591 13:37:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:17:39.591 13:37:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:17:39.591 13:37:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:17:39.591 13:37:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:17:39.591 13:37:47 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:17:39.591 13:37:47 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
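The asan and ubsan "tests" above are trivial echoes, but they go through the same run_test harness (defined in common/autotest_common.sh, per the xtrace prefixes) that wraps every real test later in the run: it prints the START/END banners, executes the body under time, and reports real/user/sys. A minimal sketch of the pattern, inferred from the banners and timing lines in the log (the real helper also validates its arguments and manages xtrace and suite accounting):

    run_test() {
        # print an opening banner, time the test body, print a closing banner
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }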
00:17:39.591 13:37:47 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:17:39.591 13:37:47 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:17:39.850 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:17:39.850 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:17:40.418 Using 'verbs' RDMA provider 00:17:56.249 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:18:14.367 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:18:14.367 Creating mk/config.mk...done. 00:18:14.367 Creating mk/cc.flags.mk...done. 00:18:14.367 Type 'make' to build. 00:18:14.367 13:38:20 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:18:14.367 13:38:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:18:14.367 13:38:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:18:14.367 13:38:20 -- common/autotest_common.sh@10 -- $ set +x 00:18:14.367 ************************************ 00:18:14.367 START TEST make 00:18:14.367 ************************************ 00:18:14.367 13:38:20 make -- common/autotest_common.sh@1129 -- $ make -j10 00:18:14.367 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:18:14.367 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:18:14.367 meson setup builddir \ 00:18:14.367 -Dwith-libaio=enabled \ 00:18:14.367 -Dwith-liburing=enabled \ 00:18:14.367 -Dwith-libvfn=disabled \ 00:18:14.367 -Dwith-spdk=disabled \ 00:18:14.367 -Dexamples=false \ 00:18:14.367 -Dtests=false \ 00:18:14.367 -Dtools=false && \ 00:18:14.367 meson compile -C builddir && \ 00:18:14.367 cd -) 00:18:14.367 make[1]: Nothing to be done for 'all'. 
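Because configure was run with --with-xnvme, make first drops into the bundled xnvme subproject and configures it with meson: the libaio and io_uring backends are enabled, while libvfn, the SPDK backend, and all examples/tests/tools are disabled, since only the library is needed. The standalone equivalent, if you ever want to rebuild just this piece (same flags as the recipe echoed above; paths assume the log's checkout layout):

    cd /home/vagrant/spdk_repo/spdk/xnvme
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir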
00:18:15.745 The Meson build system 00:18:15.745 Version: 1.5.0 00:18:15.745 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:18:15.745 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:18:15.745 Build type: native build 00:18:15.745 Project name: xnvme 00:18:15.745 Project version: 0.7.5 00:18:15.745 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:18:15.745 C linker for the host machine: cc ld.bfd 2.40-14 00:18:15.745 Host machine cpu family: x86_64 00:18:15.745 Host machine cpu: x86_64 00:18:15.745 Message: host_machine.system: linux 00:18:15.745 Compiler for C supports arguments -Wno-missing-braces: YES 00:18:15.745 Compiler for C supports arguments -Wno-cast-function-type: YES 00:18:15.745 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:18:15.745 Run-time dependency threads found: YES 00:18:15.745 Has header "setupapi.h" : NO 00:18:15.745 Has header "linux/blkzoned.h" : YES 00:18:15.745 Has header "linux/blkzoned.h" : YES (cached) 00:18:15.745 Has header "libaio.h" : YES 00:18:15.745 Library aio found: YES 00:18:15.745 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:18:15.745 Run-time dependency liburing found: YES 2.2 00:18:15.745 Dependency libvfn skipped: feature with-libvfn disabled 00:18:15.745 Found CMake: /usr/bin/cmake (3.27.7) 00:18:15.745 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:18:15.745 Subproject spdk : skipped: feature with-spdk disabled 00:18:15.745 Run-time dependency appleframeworks found: NO (tried framework) 00:18:15.745 Run-time dependency appleframeworks found: NO (tried framework) 00:18:15.745 Library rt found: YES 00:18:15.745 Checking for function "clock_gettime" with dependency -lrt: YES 00:18:15.745 Configuring xnvme_config.h using configuration 00:18:15.745 Configuring xnvme.spec using configuration 00:18:15.745 Run-time dependency bash-completion found: YES 2.11 00:18:15.745 Message: Bash-completions: /usr/share/bash-completion/completions 00:18:15.745 Program cp found: YES (/usr/bin/cp) 00:18:15.745 Build targets in project: 3 00:18:15.745 00:18:15.745 xnvme 0.7.5 00:18:15.745 00:18:15.745 Subprojects 00:18:15.745 spdk : NO Feature 'with-spdk' disabled 00:18:15.745 00:18:15.745 User defined options 00:18:15.745 examples : false 00:18:15.745 tests : false 00:18:15.745 tools : false 00:18:15.745 with-libaio : enabled 00:18:15.745 with-liburing: enabled 00:18:15.745 with-libvfn : disabled 00:18:15.745 with-spdk : disabled 00:18:15.745 00:18:15.745 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:18:16.003 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:18:16.003 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:18:16.262 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:18:16.262 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:18:16.262 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:18:16.262 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:18:16.262 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:18:16.262 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:18:16.262 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:18:16.262 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:18:16.262 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:18:16.262 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:18:16.262 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:18:16.262 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:18:16.262 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:18:16.262 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:18:16.262 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:18:16.521 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:18:16.521 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:18:16.521 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:18:16.521 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:18:16.521 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:18:16.521 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:18:16.521 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:18:16.521 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:18:16.521 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:18:16.521 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:18:16.521 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:18:16.521 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:18:16.521 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:18:16.521 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:18:16.521 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:18:16.521 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:18:16.521 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:18:16.521 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:18:16.521 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:18:16.521 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:18:16.521 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:18:16.521 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:18:16.521 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:18:16.521 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:18:16.521 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:18:16.521 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:18:16.521 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:18:16.521 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:18:16.521 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:18:16.780 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:18:16.780 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:18:16.780 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:18:16.780 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:18:16.780 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:18:16.780 [51/76] 
Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:18:16.780 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:18:16.780 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:18:16.780 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:18:16.780 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:18:16.780 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:18:16.780 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:18:16.780 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:18:16.780 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:18:16.780 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:18:16.780 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:18:16.780 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:18:16.780 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:18:16.780 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:18:17.038 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:18:17.038 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:18:17.038 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:18:17.038 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:18:17.038 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:18:17.038 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:18:17.038 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:18:17.038 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:18:17.038 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:18:17.604 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:18:17.604 [75/76] Linking static target lib/libxnvme.a 00:18:17.604 [76/76] Linking target lib/libxnvme.so.0.7.5 00:18:17.604 INFO: autodetecting backend as ninja 00:18:17.604 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:18:17.604 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:18:25.820 The Meson build system 00:18:25.820 Version: 1.5.0 00:18:25.820 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:18:25.820 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:18:25.820 Build type: native build 00:18:25.820 Program cat found: YES (/usr/bin/cat) 00:18:25.820 Project name: DPDK 00:18:25.820 Project version: 24.03.0 00:18:25.820 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:18:25.820 C linker for the host machine: cc ld.bfd 2.40-14 00:18:25.820 Host machine cpu family: x86_64 00:18:25.820 Host machine cpu: x86_64 00:18:25.820 Message: ## Building in Developer Mode ## 00:18:25.820 Program pkg-config found: YES (/usr/bin/pkg-config) 00:18:25.820 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:18:25.820 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:18:25.820 Program python3 found: YES (/usr/bin/python3) 00:18:25.820 Program cat found: YES (/usr/bin/cat) 00:18:25.820 Compiler for C supports arguments -march=native: YES 00:18:25.820 Checking for size of "void *" : 8 00:18:25.820 Checking for size of "void *" : 8 (cached) 00:18:25.820 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:18:25.820 Library m found: YES 00:18:25.820 Library numa found: YES 00:18:25.820 Has header "numaif.h" : YES 00:18:25.820 Library fdt found: NO 00:18:25.820 Library execinfo found: NO 00:18:25.820 Has header "execinfo.h" : YES 00:18:25.820 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:18:25.820 Run-time dependency libarchive found: NO (tried pkgconfig) 00:18:25.820 Run-time dependency libbsd found: NO (tried pkgconfig) 00:18:25.820 Run-time dependency jansson found: NO (tried pkgconfig) 00:18:25.820 Run-time dependency openssl found: YES 3.1.1 00:18:25.820 Run-time dependency libpcap found: YES 1.10.4 00:18:25.820 Has header "pcap.h" with dependency libpcap: YES 00:18:25.820 Compiler for C supports arguments -Wcast-qual: YES 00:18:25.820 Compiler for C supports arguments -Wdeprecated: YES 00:18:25.820 Compiler for C supports arguments -Wformat: YES 00:18:25.820 Compiler for C supports arguments -Wformat-nonliteral: NO 00:18:25.820 Compiler for C supports arguments -Wformat-security: NO 00:18:25.820 Compiler for C supports arguments -Wmissing-declarations: YES 00:18:25.820 Compiler for C supports arguments -Wmissing-prototypes: YES 00:18:25.820 Compiler for C supports arguments -Wnested-externs: YES 00:18:25.820 Compiler for C supports arguments -Wold-style-definition: YES 00:18:25.820 Compiler for C supports arguments -Wpointer-arith: YES 00:18:25.820 Compiler for C supports arguments -Wsign-compare: YES 00:18:25.820 Compiler for C supports arguments -Wstrict-prototypes: YES 00:18:25.820 Compiler for C supports arguments -Wundef: YES 00:18:25.820 Compiler for C supports arguments -Wwrite-strings: YES 00:18:25.820 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:18:25.820 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:18:25.820 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:18:25.820 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:18:25.820 Program objdump found: YES (/usr/bin/objdump) 00:18:25.820 Compiler for C supports arguments -mavx512f: YES 00:18:25.820 Checking if "AVX512 checking" compiles: YES 00:18:25.821 Fetching value of define "__SSE4_2__" : 1 00:18:25.821 Fetching value of define "__AES__" : 1 00:18:25.821 Fetching value of define "__AVX__" : 1 00:18:25.821 Fetching value of define "__AVX2__" : 1 00:18:25.821 Fetching value of define "__AVX512BW__" : 1 00:18:25.821 Fetching value of define "__AVX512CD__" : 1 00:18:25.821 Fetching value of define "__AVX512DQ__" : 1 00:18:25.821 Fetching value of define "__AVX512F__" : 1 00:18:25.821 Fetching value of define "__AVX512VL__" : 1 00:18:25.821 Fetching value of define "__PCLMUL__" : 1 00:18:25.821 Fetching value of define "__RDRND__" : 1 00:18:25.821 Fetching value of define "__RDSEED__" : 1 00:18:25.821 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:18:25.821 Fetching value of define "__znver1__" : (undefined) 00:18:25.821 Fetching value of define "__znver2__" : (undefined) 00:18:25.821 Fetching value of define "__znver3__" : (undefined) 00:18:25.821 Fetching value of define "__znver4__" : (undefined) 00:18:25.821 Library asan found: YES 00:18:25.821 Compiler for C supports arguments -Wno-format-truncation: YES 00:18:25.821 Message: lib/log: Defining dependency "log" 00:18:25.821 Message: lib/kvargs: Defining dependency "kvargs" 00:18:25.821 Message: lib/telemetry: Defining dependency "telemetry" 00:18:25.821 Library rt found: YES 00:18:25.821 Checking for function "getentropy" : NO 00:18:25.821 
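The long run of 'Fetching value of define' lines above is DPDK's meson build probing which SIMD macros the compiler predefines under -march=native; those answers decide which vector code paths (AVX2, AVX-512, PCLMUL, ...) get compiled in. The same probe can be reproduced by hand with any GCC or Clang (a sketch, independent of the build):

    # dump the compiler's predefined macros and keep the ISA ones meson asks about
    cc -march=native -dM -E - </dev/null \
        | grep -E '__(SSE4_2|AES|PCLMUL|RDRND|RDSEED|AVX|AVX2|AVX512(F|BW|CD|DQ|VL))__'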
Message: lib/eal: Defining dependency "eal" 00:18:25.821 Message: lib/ring: Defining dependency "ring" 00:18:25.821 Message: lib/rcu: Defining dependency "rcu" 00:18:25.821 Message: lib/mempool: Defining dependency "mempool" 00:18:25.821 Message: lib/mbuf: Defining dependency "mbuf" 00:18:25.821 Fetching value of define "__PCLMUL__" : 1 (cached) 00:18:25.821 Fetching value of define "__AVX512F__" : 1 (cached) 00:18:25.821 Fetching value of define "__AVX512BW__" : 1 (cached) 00:18:25.821 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:18:25.821 Fetching value of define "__AVX512VL__" : 1 (cached) 00:18:25.821 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:18:25.821 Compiler for C supports arguments -mpclmul: YES 00:18:25.821 Compiler for C supports arguments -maes: YES 00:18:25.821 Compiler for C supports arguments -mavx512f: YES (cached) 00:18:25.821 Compiler for C supports arguments -mavx512bw: YES 00:18:25.821 Compiler for C supports arguments -mavx512dq: YES 00:18:25.821 Compiler for C supports arguments -mavx512vl: YES 00:18:25.821 Compiler for C supports arguments -mvpclmulqdq: YES 00:18:25.821 Compiler for C supports arguments -mavx2: YES 00:18:25.821 Compiler for C supports arguments -mavx: YES 00:18:25.821 Message: lib/net: Defining dependency "net" 00:18:25.821 Message: lib/meter: Defining dependency "meter" 00:18:25.821 Message: lib/ethdev: Defining dependency "ethdev" 00:18:25.821 Message: lib/pci: Defining dependency "pci" 00:18:25.821 Message: lib/cmdline: Defining dependency "cmdline" 00:18:25.821 Message: lib/hash: Defining dependency "hash" 00:18:25.821 Message: lib/timer: Defining dependency "timer" 00:18:25.821 Message: lib/compressdev: Defining dependency "compressdev" 00:18:25.821 Message: lib/cryptodev: Defining dependency "cryptodev" 00:18:25.821 Message: lib/dmadev: Defining dependency "dmadev" 00:18:25.821 Compiler for C supports arguments -Wno-cast-qual: YES 00:18:25.821 Message: lib/power: Defining dependency "power" 00:18:25.821 Message: lib/reorder: Defining dependency "reorder" 00:18:25.821 Message: lib/security: Defining dependency "security" 00:18:25.821 Has header "linux/userfaultfd.h" : YES 00:18:25.821 Has header "linux/vduse.h" : YES 00:18:25.821 Message: lib/vhost: Defining dependency "vhost" 00:18:25.821 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:18:25.821 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:18:25.821 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:18:25.821 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:18:25.821 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:18:25.821 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:18:25.821 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:18:25.821 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:18:25.821 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:18:25.821 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:18:25.821 Program doxygen found: YES (/usr/local/bin/doxygen) 00:18:25.821 Configuring doxy-api-html.conf using configuration 00:18:25.821 Configuring doxy-api-man.conf using configuration 00:18:25.821 Program mandb found: YES (/usr/bin/mandb) 00:18:25.821 Program sphinx-build found: NO 00:18:25.821 Configuring rte_build_config.h using configuration 00:18:25.821 Message: 00:18:25.821 ================= 00:18:25.821 Applications 
Enabled 00:18:25.821 ================= 00:18:25.821 00:18:25.821 apps: 00:18:25.821 00:18:25.821 00:18:25.821 Message: 00:18:25.821 ================= 00:18:25.821 Libraries Enabled 00:18:25.821 ================= 00:18:25.821 00:18:25.821 libs: 00:18:25.821 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:18:25.821 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:18:25.821 cryptodev, dmadev, power, reorder, security, vhost, 00:18:25.821 00:18:25.821 Message: 00:18:25.821 =============== 00:18:25.821 Drivers Enabled 00:18:25.821 =============== 00:18:25.821 00:18:25.821 common: 00:18:25.821 00:18:25.821 bus: 00:18:25.821 pci, vdev, 00:18:25.821 mempool: 00:18:25.821 ring, 00:18:25.821 dma: 00:18:25.821 00:18:25.821 net: 00:18:25.821 00:18:25.821 crypto: 00:18:25.821 00:18:25.821 compress: 00:18:25.821 00:18:25.821 vdpa: 00:18:25.821 00:18:25.821 00:18:25.821 Message: 00:18:25.821 ================= 00:18:25.821 Content Skipped 00:18:25.821 ================= 00:18:25.821 00:18:25.821 apps: 00:18:25.821 dumpcap: explicitly disabled via build config 00:18:25.821 graph: explicitly disabled via build config 00:18:25.821 pdump: explicitly disabled via build config 00:18:25.821 proc-info: explicitly disabled via build config 00:18:25.821 test-acl: explicitly disabled via build config 00:18:25.821 test-bbdev: explicitly disabled via build config 00:18:25.821 test-cmdline: explicitly disabled via build config 00:18:25.821 test-compress-perf: explicitly disabled via build config 00:18:25.821 test-crypto-perf: explicitly disabled via build config 00:18:25.821 test-dma-perf: explicitly disabled via build config 00:18:25.821 test-eventdev: explicitly disabled via build config 00:18:25.821 test-fib: explicitly disabled via build config 00:18:25.821 test-flow-perf: explicitly disabled via build config 00:18:25.821 test-gpudev: explicitly disabled via build config 00:18:25.821 test-mldev: explicitly disabled via build config 00:18:25.821 test-pipeline: explicitly disabled via build config 00:18:25.821 test-pmd: explicitly disabled via build config 00:18:25.821 test-regex: explicitly disabled via build config 00:18:25.821 test-sad: explicitly disabled via build config 00:18:25.821 test-security-perf: explicitly disabled via build config 00:18:25.821 00:18:25.821 libs: 00:18:25.821 argparse: explicitly disabled via build config 00:18:25.821 metrics: explicitly disabled via build config 00:18:25.821 acl: explicitly disabled via build config 00:18:25.821 bbdev: explicitly disabled via build config 00:18:25.821 bitratestats: explicitly disabled via build config 00:18:25.821 bpf: explicitly disabled via build config 00:18:25.821 cfgfile: explicitly disabled via build config 00:18:25.821 distributor: explicitly disabled via build config 00:18:25.821 efd: explicitly disabled via build config 00:18:25.822 eventdev: explicitly disabled via build config 00:18:25.822 dispatcher: explicitly disabled via build config 00:18:25.822 gpudev: explicitly disabled via build config 00:18:25.822 gro: explicitly disabled via build config 00:18:25.822 gso: explicitly disabled via build config 00:18:25.822 ip_frag: explicitly disabled via build config 00:18:25.822 jobstats: explicitly disabled via build config 00:18:25.822 latencystats: explicitly disabled via build config 00:18:25.822 lpm: explicitly disabled via build config 00:18:25.822 member: explicitly disabled via build config 00:18:25.822 pcapng: explicitly disabled via build config 00:18:25.822 rawdev: explicitly disabled via build config 00:18:25.822 
regexdev: explicitly disabled via build config 00:18:25.822 mldev: explicitly disabled via build config 00:18:25.822 rib: explicitly disabled via build config 00:18:25.822 sched: explicitly disabled via build config 00:18:25.822 stack: explicitly disabled via build config 00:18:25.822 ipsec: explicitly disabled via build config 00:18:25.822 pdcp: explicitly disabled via build config 00:18:25.822 fib: explicitly disabled via build config 00:18:25.822 port: explicitly disabled via build config 00:18:25.822 pdump: explicitly disabled via build config 00:18:25.822 table: explicitly disabled via build config 00:18:25.822 pipeline: explicitly disabled via build config 00:18:25.822 graph: explicitly disabled via build config 00:18:25.822 node: explicitly disabled via build config 00:18:25.822 00:18:25.822 drivers: 00:18:25.822 common/cpt: not in enabled drivers build config 00:18:25.822 common/dpaax: not in enabled drivers build config 00:18:25.822 common/iavf: not in enabled drivers build config 00:18:25.822 common/idpf: not in enabled drivers build config 00:18:25.822 common/ionic: not in enabled drivers build config 00:18:25.822 common/mvep: not in enabled drivers build config 00:18:25.822 common/octeontx: not in enabled drivers build config 00:18:25.822 bus/auxiliary: not in enabled drivers build config 00:18:25.822 bus/cdx: not in enabled drivers build config 00:18:25.822 bus/dpaa: not in enabled drivers build config 00:18:25.822 bus/fslmc: not in enabled drivers build config 00:18:25.822 bus/ifpga: not in enabled drivers build config 00:18:25.822 bus/platform: not in enabled drivers build config 00:18:25.822 bus/uacce: not in enabled drivers build config 00:18:25.822 bus/vmbus: not in enabled drivers build config 00:18:25.822 common/cnxk: not in enabled drivers build config 00:18:25.822 common/mlx5: not in enabled drivers build config 00:18:25.822 common/nfp: not in enabled drivers build config 00:18:25.822 common/nitrox: not in enabled drivers build config 00:18:25.822 common/qat: not in enabled drivers build config 00:18:25.822 common/sfc_efx: not in enabled drivers build config 00:18:25.822 mempool/bucket: not in enabled drivers build config 00:18:25.822 mempool/cnxk: not in enabled drivers build config 00:18:25.822 mempool/dpaa: not in enabled drivers build config 00:18:25.822 mempool/dpaa2: not in enabled drivers build config 00:18:25.822 mempool/octeontx: not in enabled drivers build config 00:18:25.822 mempool/stack: not in enabled drivers build config 00:18:25.822 dma/cnxk: not in enabled drivers build config 00:18:25.822 dma/dpaa: not in enabled drivers build config 00:18:25.822 dma/dpaa2: not in enabled drivers build config 00:18:25.822 dma/hisilicon: not in enabled drivers build config 00:18:25.822 dma/idxd: not in enabled drivers build config 00:18:25.822 dma/ioat: not in enabled drivers build config 00:18:25.822 dma/skeleton: not in enabled drivers build config 00:18:25.822 net/af_packet: not in enabled drivers build config 00:18:25.822 net/af_xdp: not in enabled drivers build config 00:18:25.822 net/ark: not in enabled drivers build config 00:18:25.822 net/atlantic: not in enabled drivers build config 00:18:25.822 net/avp: not in enabled drivers build config 00:18:25.822 net/axgbe: not in enabled drivers build config 00:18:25.822 net/bnx2x: not in enabled drivers build config 00:18:25.822 net/bnxt: not in enabled drivers build config 00:18:25.822 net/bonding: not in enabled drivers build config 00:18:25.822 net/cnxk: not in enabled drivers build config 00:18:25.822 net/cpfl: 
not in enabled drivers build config 00:18:25.822 net/cxgbe: not in enabled drivers build config 00:18:25.822 net/dpaa: not in enabled drivers build config 00:18:25.822 net/dpaa2: not in enabled drivers build config 00:18:25.822 net/e1000: not in enabled drivers build config 00:18:25.822 net/ena: not in enabled drivers build config 00:18:25.822 net/enetc: not in enabled drivers build config 00:18:25.822 net/enetfec: not in enabled drivers build config 00:18:25.822 net/enic: not in enabled drivers build config 00:18:25.822 net/failsafe: not in enabled drivers build config 00:18:25.822 net/fm10k: not in enabled drivers build config 00:18:25.822 net/gve: not in enabled drivers build config 00:18:25.822 net/hinic: not in enabled drivers build config 00:18:25.822 net/hns3: not in enabled drivers build config 00:18:25.822 net/i40e: not in enabled drivers build config 00:18:25.822 net/iavf: not in enabled drivers build config 00:18:25.822 net/ice: not in enabled drivers build config 00:18:25.822 net/idpf: not in enabled drivers build config 00:18:25.822 net/igc: not in enabled drivers build config 00:18:25.822 net/ionic: not in enabled drivers build config 00:18:25.822 net/ipn3ke: not in enabled drivers build config 00:18:25.822 net/ixgbe: not in enabled drivers build config 00:18:25.822 net/mana: not in enabled drivers build config 00:18:25.822 net/memif: not in enabled drivers build config 00:18:25.822 net/mlx4: not in enabled drivers build config 00:18:25.822 net/mlx5: not in enabled drivers build config 00:18:25.822 net/mvneta: not in enabled drivers build config 00:18:25.822 net/mvpp2: not in enabled drivers build config 00:18:25.822 net/netvsc: not in enabled drivers build config 00:18:25.822 net/nfb: not in enabled drivers build config 00:18:25.822 net/nfp: not in enabled drivers build config 00:18:25.822 net/ngbe: not in enabled drivers build config 00:18:25.822 net/null: not in enabled drivers build config 00:18:25.822 net/octeontx: not in enabled drivers build config 00:18:25.822 net/octeon_ep: not in enabled drivers build config 00:18:25.822 net/pcap: not in enabled drivers build config 00:18:25.822 net/pfe: not in enabled drivers build config 00:18:25.822 net/qede: not in enabled drivers build config 00:18:25.822 net/ring: not in enabled drivers build config 00:18:25.822 net/sfc: not in enabled drivers build config 00:18:25.822 net/softnic: not in enabled drivers build config 00:18:25.822 net/tap: not in enabled drivers build config 00:18:25.822 net/thunderx: not in enabled drivers build config 00:18:25.822 net/txgbe: not in enabled drivers build config 00:18:25.822 net/vdev_netvsc: not in enabled drivers build config 00:18:25.822 net/vhost: not in enabled drivers build config 00:18:25.822 net/virtio: not in enabled drivers build config 00:18:25.822 net/vmxnet3: not in enabled drivers build config 00:18:25.822 raw/*: missing internal dependency, "rawdev" 00:18:25.822 crypto/armv8: not in enabled drivers build config 00:18:25.822 crypto/bcmfs: not in enabled drivers build config 00:18:25.822 crypto/caam_jr: not in enabled drivers build config 00:18:25.822 crypto/ccp: not in enabled drivers build config 00:18:25.822 crypto/cnxk: not in enabled drivers build config 00:18:25.822 crypto/dpaa_sec: not in enabled drivers build config 00:18:25.822 crypto/dpaa2_sec: not in enabled drivers build config 00:18:25.822 crypto/ipsec_mb: not in enabled drivers build config 00:18:25.822 crypto/mlx5: not in enabled drivers build config 00:18:25.822 crypto/mvsam: not in enabled drivers build config 
00:18:25.822 crypto/nitrox: not in enabled drivers build config 00:18:25.822 crypto/null: not in enabled drivers build config 00:18:25.823 crypto/octeontx: not in enabled drivers build config 00:18:25.823 crypto/openssl: not in enabled drivers build config 00:18:25.823 crypto/scheduler: not in enabled drivers build config 00:18:25.823 crypto/uadk: not in enabled drivers build config 00:18:25.823 crypto/virtio: not in enabled drivers build config 00:18:25.823 compress/isal: not in enabled drivers build config 00:18:25.823 compress/mlx5: not in enabled drivers build config 00:18:25.823 compress/nitrox: not in enabled drivers build config 00:18:25.823 compress/octeontx: not in enabled drivers build config 00:18:25.823 compress/zlib: not in enabled drivers build config 00:18:25.823 regex/*: missing internal dependency, "regexdev" 00:18:25.823 ml/*: missing internal dependency, "mldev" 00:18:25.823 vdpa/ifc: not in enabled drivers build config 00:18:25.823 vdpa/mlx5: not in enabled drivers build config 00:18:25.823 vdpa/nfp: not in enabled drivers build config 00:18:25.823 vdpa/sfc: not in enabled drivers build config 00:18:25.823 event/*: missing internal dependency, "eventdev" 00:18:25.823 baseband/*: missing internal dependency, "bbdev" 00:18:25.823 gpu/*: missing internal dependency, "gpudev" 00:18:25.823 00:18:25.823 00:18:25.823 Build targets in project: 85 00:18:25.823 00:18:25.823 DPDK 24.03.0 00:18:25.823 00:18:25.823 User defined options 00:18:25.823 buildtype : debug 00:18:25.823 default_library : shared 00:18:25.823 libdir : lib 00:18:25.823 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:18:25.823 b_sanitize : address 00:18:25.823 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:18:25.823 c_link_args : 00:18:25.823 cpu_instruction_set: native 00:18:25.823 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:18:25.823 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:18:25.823 enable_docs : false 00:18:25.823 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:18:25.823 enable_kmods : false 00:18:25.823 max_lcores : 128 00:18:25.823 tests : false 00:18:25.823 00:18:25.823 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:18:26.407 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:18:26.407 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:18:26.407 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:18:26.407 [3/268] Linking static target lib/librte_log.a 00:18:26.407 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:18:26.407 [5/268] Linking static target lib/librte_kvargs.a 00:18:26.407 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:18:26.974 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:18:26.974 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:18:26.974 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 
00:18:26.974 [10/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:18:26.974 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:18:26.974 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:18:26.974 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:18:26.974 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:18:26.974 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:18:27.236 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:18:27.236 [17/268] Linking static target lib/librte_telemetry.a 00:18:27.236 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:18:27.496 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:18:27.496 [20/268] Linking target lib/librte_log.so.24.1 00:18:27.756 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:18:27.756 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:18:27.756 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:18:27.756 [24/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:18:27.757 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:18:27.757 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:18:27.757 [27/268] Linking target lib/librte_kvargs.so.24.1 00:18:27.757 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:18:28.016 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:18:28.016 [30/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:18:28.016 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:18:28.016 [32/268] Linking target lib/librte_telemetry.so.24.1 00:18:28.016 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:18:28.016 [34/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:18:28.275 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:18:28.275 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:18:28.275 [37/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:18:28.275 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:18:28.534 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:18:28.534 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:18:28.534 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:18:28.534 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:18:28.534 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:18:28.534 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:18:28.792 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:18:28.792 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:18:28.792 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:18:29.050 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:18:29.050 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:18:29.050 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:18:29.050 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:18:29.313 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:18:29.313 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:18:29.313 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:18:29.581 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:18:29.581 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:18:29.581 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:18:29.581 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:18:29.581 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:18:29.581 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:18:29.839 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:18:29.839 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:18:29.839 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:18:29.839 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:18:29.839 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:18:30.098 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:18:30.098 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:18:30.357 [68/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:18:30.357 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:18:30.357 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:18:30.616 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:18:30.616 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:18:30.616 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:18:30.616 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:18:30.616 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:18:30.616 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:18:30.875 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:18:30.875 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:18:30.875 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:18:31.134 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:18:31.134 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:18:31.134 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:18:31.134 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:18:31.134 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:18:31.134 [85/268] Linking static target lib/librte_ring.a 00:18:31.134 [86/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:18:31.393 [87/268] Linking static target lib/librte_eal.a 00:18:31.393 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:18:31.393 [89/268] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:18:31.652 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:18:31.652 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:18:31.652 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:18:31.652 [93/268] Linking static target lib/librte_rcu.a 00:18:31.652 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:18:31.652 [95/268] Linking static target lib/librte_mempool.a 00:18:31.652 [96/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:18:31.652 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:18:31.942 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:18:31.942 [99/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:18:31.942 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:18:31.942 [101/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:18:31.942 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:18:32.229 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:18:32.229 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:18:32.229 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:18:32.229 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:18:32.229 [107/268] Linking static target lib/librte_net.a 00:18:32.229 [108/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:18:32.229 [109/268] Linking static target lib/librte_meter.a 00:18:32.488 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:18:32.746 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:18:32.746 [112/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:18:32.746 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:18:32.746 [114/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:18:32.746 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:18:32.746 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:18:33.313 [117/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:18:33.313 [118/268] Linking static target lib/librte_mbuf.a 00:18:33.313 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:18:33.313 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:18:33.313 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:18:33.313 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:18:33.881 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:18:33.881 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:18:33.881 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:18:34.139 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:18:34.139 [127/268] Linking static target lib/librte_pci.a 00:18:34.139 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:18:34.139 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:18:34.139 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:18:34.139 
[131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:18:34.139 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:18:34.398 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:18:34.398 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:18:34.398 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:18:34.398 [136/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:18:34.398 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:18:34.398 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:18:34.398 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:18:34.398 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:18:34.398 [141/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:18:34.398 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:18:34.398 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:18:34.657 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:18:34.657 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:18:34.657 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:18:34.657 [147/268] Linking static target lib/librte_cmdline.a 00:18:34.915 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:18:34.915 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:18:34.915 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:18:34.915 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:18:35.172 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:18:35.430 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:18:35.430 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:18:35.430 [155/268] Linking static target lib/librte_timer.a 00:18:35.688 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:18:35.688 [157/268] Linking static target lib/librte_compressdev.a 00:18:35.688 [158/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:18:35.946 [159/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:18:35.946 [160/268] Linking static target lib/librte_hash.a 00:18:35.946 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:18:35.946 [162/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:18:35.946 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:18:36.202 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:18:36.202 [165/268] Linking static target lib/librte_dmadev.a 00:18:36.202 [166/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:18:36.202 [167/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:18:36.202 [168/268] Linking static target lib/librte_ethdev.a 00:18:36.202 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:18:36.202 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by 
meson to capture output) 00:18:36.458 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:18:36.458 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:18:36.458 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:18:36.715 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:36.973 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:18:36.973 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:18:36.973 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:18:36.973 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:18:36.973 [179/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:36.973 [180/268] Linking static target lib/librte_cryptodev.a 00:18:36.973 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:18:36.973 [182/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:18:37.229 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:18:37.229 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:18:37.487 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:18:37.487 [186/268] Linking static target lib/librte_power.a 00:18:37.745 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:18:37.745 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:18:37.745 [189/268] Linking static target lib/librte_reorder.a 00:18:37.745 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:18:37.745 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:18:38.005 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:18:38.005 [193/268] Linking static target lib/librte_security.a 00:18:38.278 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:18:38.278 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:18:38.537 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:18:38.795 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:18:38.795 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:18:38.795 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:18:38.795 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:18:39.053 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:18:39.053 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:18:39.311 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:18:39.311 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:18:39.311 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:18:39.569 [206/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:39.569 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:18:39.569 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:18:39.569 [209/268] Linking static target 
drivers/libtmp_rte_bus_pci.a 00:18:39.569 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:18:39.569 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:18:39.827 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:18:39.827 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:18:39.827 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:18:39.827 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:18:39.827 [216/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:18:39.827 [217/268] Linking static target drivers/librte_bus_vdev.a 00:18:39.827 [218/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:18:39.827 [219/268] Linking static target drivers/librte_bus_pci.a 00:18:40.085 [220/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:18:40.085 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:18:40.085 [222/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:40.343 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:18:40.343 [224/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:18:40.343 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:18:40.343 [226/268] Linking static target drivers/librte_mempool_ring.a 00:18:40.343 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:18:41.312 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:18:42.268 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:18:42.268 [230/268] Linking target lib/librte_eal.so.24.1 00:18:42.526 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:18:42.526 [232/268] Linking target drivers/librte_bus_vdev.so.24.1 00:18:42.526 [233/268] Linking target lib/librte_meter.so.24.1 00:18:42.526 [234/268] Linking target lib/librte_ring.so.24.1 00:18:42.526 [235/268] Linking target lib/librte_pci.so.24.1 00:18:42.526 [236/268] Linking target lib/librte_timer.so.24.1 00:18:42.526 [237/268] Linking target lib/librte_dmadev.so.24.1 00:18:42.784 [238/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:18:42.784 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:18:42.784 [240/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:18:42.784 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:18:42.784 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:18:42.784 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:18:42.784 [244/268] Linking target lib/librte_rcu.so.24.1 00:18:42.784 [245/268] Linking target lib/librte_mempool.so.24.1 00:18:43.043 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:18:43.043 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:18:43.043 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:18:43.043 [249/268] Linking target 
lib/librte_mbuf.so.24.1 00:18:43.043 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:18:43.301 [251/268] Linking target lib/librte_compressdev.so.24.1 00:18:43.301 [252/268] Linking target lib/librte_reorder.so.24.1 00:18:43.301 [253/268] Linking target lib/librte_net.so.24.1 00:18:43.301 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:18:43.301 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:18:43.301 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:18:43.301 [257/268] Linking target lib/librte_cmdline.so.24.1 00:18:43.301 [258/268] Linking target lib/librte_security.so.24.1 00:18:43.301 [259/268] Linking target lib/librte_hash.so.24.1 00:18:43.560 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:18:44.953 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:18:44.953 [262/268] Linking target lib/librte_ethdev.so.24.1 00:18:45.213 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:18:45.213 [264/268] Linking target lib/librte_power.so.24.1 00:18:47.118 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:18:47.118 [266/268] Linking static target lib/librte_vhost.a 00:18:49.652 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:18:49.652 [268/268] Linking target lib/librte_vhost.so.24.1 00:18:49.652 INFO: autodetecting backend as ninja 00:18:49.652 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:19:07.764 CC lib/ut/ut.o 00:19:07.764 CC lib/log/log.o 00:19:07.764 CC lib/log/log_flags.o 00:19:07.764 CC lib/log/log_deprecated.o 00:19:07.764 CC lib/ut_mock/mock.o 00:19:08.037 LIB libspdk_ut.a 00:19:08.037 SO libspdk_ut.so.2.0 00:19:08.038 LIB libspdk_log.a 00:19:08.038 LIB libspdk_ut_mock.a 00:19:08.038 SYMLINK libspdk_ut.so 00:19:08.038 SO libspdk_ut_mock.so.6.0 00:19:08.038 SO libspdk_log.so.7.1 00:19:08.038 SYMLINK libspdk_ut_mock.so 00:19:08.038 SYMLINK libspdk_log.so 00:19:08.313 CC lib/dma/dma.o 00:19:08.591 CXX lib/trace_parser/trace.o 00:19:08.591 CC lib/util/bit_array.o 00:19:08.591 CC lib/ioat/ioat.o 00:19:08.591 CC lib/util/base64.o 00:19:08.591 CC lib/util/crc32.o 00:19:08.591 CC lib/util/crc32c.o 00:19:08.591 CC lib/util/cpuset.o 00:19:08.591 CC lib/util/crc16.o 00:19:08.591 CC lib/vfio_user/host/vfio_user_pci.o 00:19:08.591 CC lib/vfio_user/host/vfio_user.o 00:19:08.591 CC lib/util/crc32_ieee.o 00:19:08.591 LIB libspdk_dma.a 00:19:08.591 CC lib/util/crc64.o 00:19:08.591 CC lib/util/dif.o 00:19:08.591 SO libspdk_dma.so.5.0 00:19:08.591 CC lib/util/fd.o 00:19:08.871 CC lib/util/fd_group.o 00:19:08.871 SYMLINK libspdk_dma.so 00:19:08.871 CC lib/util/file.o 00:19:08.871 CC lib/util/hexlify.o 00:19:08.871 CC lib/util/iov.o 00:19:08.871 LIB libspdk_ioat.a 00:19:08.871 SO libspdk_ioat.so.7.0 00:19:08.871 CC lib/util/math.o 00:19:08.871 CC lib/util/net.o 00:19:08.871 LIB libspdk_vfio_user.a 00:19:08.871 SYMLINK libspdk_ioat.so 00:19:08.871 CC lib/util/pipe.o 00:19:08.871 SO libspdk_vfio_user.so.5.0 00:19:08.871 CC lib/util/strerror_tls.o 00:19:08.871 CC lib/util/string.o 00:19:08.871 SYMLINK libspdk_vfio_user.so 00:19:08.871 CC lib/util/uuid.o 00:19:09.135 CC lib/util/xor.o 00:19:09.135 CC lib/util/zipf.o 00:19:09.135 CC lib/util/md5.o 00:19:09.394 LIB libspdk_util.a 
00:19:09.653 SO libspdk_util.so.10.1 00:19:09.653 LIB libspdk_trace_parser.a 00:19:09.653 SO libspdk_trace_parser.so.6.0 00:19:09.920 SYMLINK libspdk_util.so 00:19:09.920 SYMLINK libspdk_trace_parser.so 00:19:10.179 CC lib/env_dpdk/env.o 00:19:10.179 CC lib/env_dpdk/memory.o 00:19:10.179 CC lib/rdma_utils/rdma_utils.o 00:19:10.179 CC lib/env_dpdk/init.o 00:19:10.179 CC lib/env_dpdk/pci.o 00:19:10.179 CC lib/vmd/vmd.o 00:19:10.179 CC lib/env_dpdk/threads.o 00:19:10.179 CC lib/conf/conf.o 00:19:10.179 CC lib/idxd/idxd.o 00:19:10.179 CC lib/json/json_parse.o 00:19:10.179 CC lib/json/json_util.o 00:19:10.438 LIB libspdk_conf.a 00:19:10.438 SO libspdk_conf.so.6.0 00:19:10.438 LIB libspdk_rdma_utils.a 00:19:10.438 CC lib/json/json_write.o 00:19:10.438 SO libspdk_rdma_utils.so.1.0 00:19:10.438 SYMLINK libspdk_conf.so 00:19:10.438 CC lib/vmd/led.o 00:19:10.438 SYMLINK libspdk_rdma_utils.so 00:19:10.438 CC lib/idxd/idxd_user.o 00:19:10.438 CC lib/idxd/idxd_kernel.o 00:19:10.696 CC lib/env_dpdk/pci_ioat.o 00:19:10.696 CC lib/env_dpdk/pci_virtio.o 00:19:10.696 CC lib/rdma_provider/common.o 00:19:10.696 CC lib/rdma_provider/rdma_provider_verbs.o 00:19:10.696 CC lib/env_dpdk/pci_vmd.o 00:19:10.696 CC lib/env_dpdk/pci_idxd.o 00:19:10.696 LIB libspdk_json.a 00:19:10.955 SO libspdk_json.so.6.0 00:19:10.955 CC lib/env_dpdk/pci_event.o 00:19:10.955 SYMLINK libspdk_json.so 00:19:10.955 CC lib/env_dpdk/sigbus_handler.o 00:19:10.955 CC lib/env_dpdk/pci_dpdk.o 00:19:10.955 LIB libspdk_idxd.a 00:19:10.955 CC lib/env_dpdk/pci_dpdk_2207.o 00:19:10.955 LIB libspdk_vmd.a 00:19:10.955 CC lib/env_dpdk/pci_dpdk_2211.o 00:19:10.955 SO libspdk_idxd.so.12.1 00:19:10.955 SO libspdk_vmd.so.6.0 00:19:10.955 LIB libspdk_rdma_provider.a 00:19:10.955 SO libspdk_rdma_provider.so.7.0 00:19:10.955 SYMLINK libspdk_vmd.so 00:19:10.955 SYMLINK libspdk_idxd.so 00:19:11.215 SYMLINK libspdk_rdma_provider.so 00:19:11.215 CC lib/jsonrpc/jsonrpc_server.o 00:19:11.215 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:19:11.215 CC lib/jsonrpc/jsonrpc_client.o 00:19:11.215 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:19:11.476 LIB libspdk_jsonrpc.a 00:19:11.476 SO libspdk_jsonrpc.so.6.0 00:19:11.735 SYMLINK libspdk_jsonrpc.so 00:19:11.735 CC lib/rpc/rpc.o 00:19:11.994 LIB libspdk_rpc.a 00:19:12.252 SO libspdk_rpc.so.6.0 00:19:12.252 SYMLINK libspdk_rpc.so 00:19:12.252 LIB libspdk_env_dpdk.a 00:19:12.252 SO libspdk_env_dpdk.so.15.1 00:19:12.511 CC lib/trace/trace_flags.o 00:19:12.511 CC lib/notify/notify.o 00:19:12.511 CC lib/trace/trace_rpc.o 00:19:12.511 CC lib/trace/trace.o 00:19:12.511 CC lib/notify/notify_rpc.o 00:19:12.511 CC lib/keyring/keyring.o 00:19:12.511 CC lib/keyring/keyring_rpc.o 00:19:12.511 SYMLINK libspdk_env_dpdk.so 00:19:12.511 LIB libspdk_notify.a 00:19:12.511 SO libspdk_notify.so.6.0 00:19:12.770 SYMLINK libspdk_notify.so 00:19:12.770 LIB libspdk_trace.a 00:19:12.770 LIB libspdk_keyring.a 00:19:12.770 SO libspdk_trace.so.11.0 00:19:12.770 SO libspdk_keyring.so.2.0 00:19:12.770 SYMLINK libspdk_trace.so 00:19:12.770 SYMLINK libspdk_keyring.so 00:19:13.066 CC lib/thread/thread.o 00:19:13.066 CC lib/thread/iobuf.o 00:19:13.066 CC lib/sock/sock.o 00:19:13.066 CC lib/sock/sock_rpc.o 00:19:13.661 LIB libspdk_sock.a 00:19:13.661 SO libspdk_sock.so.10.0 00:19:13.661 SYMLINK libspdk_sock.so 00:19:14.228 CC lib/nvme/nvme_ctrlr_cmd.o 00:19:14.228 CC lib/nvme/nvme_ns.o 00:19:14.228 CC lib/nvme/nvme_ctrlr.o 00:19:14.228 CC lib/nvme/nvme_fabric.o 00:19:14.228 CC lib/nvme/nvme_ns_cmd.o 00:19:14.228 CC lib/nvme/nvme_pcie.o 00:19:14.228 CC 
lib/nvme/nvme_pcie_common.o 00:19:14.228 CC lib/nvme/nvme_qpair.o 00:19:14.228 CC lib/nvme/nvme.o 00:19:14.795 CC lib/nvme/nvme_quirks.o 00:19:14.795 CC lib/nvme/nvme_transport.o 00:19:15.052 CC lib/nvme/nvme_discovery.o 00:19:15.052 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:19:15.052 LIB libspdk_thread.a 00:19:15.052 SO libspdk_thread.so.11.0 00:19:15.052 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:19:15.052 CC lib/nvme/nvme_tcp.o 00:19:15.052 SYMLINK libspdk_thread.so 00:19:15.052 CC lib/nvme/nvme_opal.o 00:19:15.052 CC lib/nvme/nvme_io_msg.o 00:19:15.311 CC lib/nvme/nvme_poll_group.o 00:19:15.311 CC lib/nvme/nvme_zns.o 00:19:15.889 CC lib/nvme/nvme_stubs.o 00:19:15.889 CC lib/accel/accel.o 00:19:15.889 CC lib/blob/blobstore.o 00:19:15.889 CC lib/nvme/nvme_auth.o 00:19:15.889 CC lib/accel/accel_rpc.o 00:19:15.889 CC lib/blob/request.o 00:19:16.148 CC lib/accel/accel_sw.o 00:19:16.148 CC lib/nvme/nvme_cuse.o 00:19:16.148 CC lib/blob/zeroes.o 00:19:16.404 CC lib/blob/blob_bs_dev.o 00:19:16.404 CC lib/nvme/nvme_rdma.o 00:19:16.663 CC lib/init/json_config.o 00:19:16.663 CC lib/virtio/virtio.o 00:19:16.663 CC lib/fsdev/fsdev.o 00:19:16.922 CC lib/init/subsystem.o 00:19:16.922 CC lib/init/subsystem_rpc.o 00:19:17.179 CC lib/virtio/virtio_vhost_user.o 00:19:17.179 CC lib/init/rpc.o 00:19:17.179 CC lib/fsdev/fsdev_io.o 00:19:17.179 CC lib/fsdev/fsdev_rpc.o 00:19:17.180 CC lib/virtio/virtio_vfio_user.o 00:19:17.180 CC lib/virtio/virtio_pci.o 00:19:17.180 LIB libspdk_init.a 00:19:17.437 SO libspdk_init.so.6.0 00:19:17.437 LIB libspdk_accel.a 00:19:17.437 SYMLINK libspdk_init.so 00:19:17.437 SO libspdk_accel.so.16.0 00:19:17.437 SYMLINK libspdk_accel.so 00:19:17.695 CC lib/event/app.o 00:19:17.695 CC lib/event/reactor.o 00:19:17.695 CC lib/event/log_rpc.o 00:19:17.695 CC lib/event/app_rpc.o 00:19:17.695 CC lib/event/scheduler_static.o 00:19:17.695 LIB libspdk_virtio.a 00:19:17.695 SO libspdk_virtio.so.7.0 00:19:17.695 CC lib/bdev/bdev.o 00:19:17.695 LIB libspdk_fsdev.a 00:19:17.695 SO libspdk_fsdev.so.2.0 00:19:17.695 SYMLINK libspdk_virtio.so 00:19:17.695 CC lib/bdev/bdev_rpc.o 00:19:17.695 CC lib/bdev/bdev_zone.o 00:19:17.695 CC lib/bdev/part.o 00:19:17.956 SYMLINK libspdk_fsdev.so 00:19:17.956 CC lib/bdev/scsi_nvme.o 00:19:17.956 LIB libspdk_nvme.a 00:19:18.216 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:19:18.216 LIB libspdk_event.a 00:19:18.216 SO libspdk_nvme.so.15.0 00:19:18.216 SO libspdk_event.so.14.0 00:19:18.475 SYMLINK libspdk_event.so 00:19:18.475 SYMLINK libspdk_nvme.so 00:19:19.073 LIB libspdk_fuse_dispatcher.a 00:19:19.073 SO libspdk_fuse_dispatcher.so.1.0 00:19:19.073 SYMLINK libspdk_fuse_dispatcher.so 00:19:20.009 LIB libspdk_blob.a 00:19:20.009 SO libspdk_blob.so.11.0 00:19:20.009 SYMLINK libspdk_blob.so 00:19:20.577 CC lib/lvol/lvol.o 00:19:20.577 CC lib/blobfs/blobfs.o 00:19:20.577 CC lib/blobfs/tree.o 00:19:21.146 LIB libspdk_bdev.a 00:19:21.146 SO libspdk_bdev.so.17.0 00:19:21.146 SYMLINK libspdk_bdev.so 00:19:21.409 CC lib/ublk/ublk.o 00:19:21.409 CC lib/ublk/ublk_rpc.o 00:19:21.409 CC lib/ftl/ftl_core.o 00:19:21.409 CC lib/ftl/ftl_init.o 00:19:21.409 CC lib/ftl/ftl_layout.o 00:19:21.409 CC lib/nbd/nbd.o 00:19:21.409 LIB libspdk_blobfs.a 00:19:21.409 CC lib/nvmf/ctrlr.o 00:19:21.409 CC lib/scsi/dev.o 00:19:21.673 SO libspdk_blobfs.so.10.0 00:19:21.673 LIB libspdk_lvol.a 00:19:21.673 SYMLINK libspdk_blobfs.so 00:19:21.673 CC lib/scsi/lun.o 00:19:21.673 SO libspdk_lvol.so.10.0 00:19:21.673 CC lib/ftl/ftl_debug.o 00:19:21.673 SYMLINK libspdk_lvol.so 00:19:21.673 CC lib/scsi/port.o 
00:19:21.673 CC lib/nvmf/ctrlr_discovery.o 00:19:21.673 CC lib/nvmf/ctrlr_bdev.o 00:19:21.934 CC lib/ftl/ftl_io.o 00:19:21.934 CC lib/ftl/ftl_sb.o 00:19:21.934 CC lib/scsi/scsi.o 00:19:21.934 CC lib/nvmf/subsystem.o 00:19:21.934 CC lib/nbd/nbd_rpc.o 00:19:21.934 CC lib/scsi/scsi_bdev.o 00:19:22.193 CC lib/ftl/ftl_l2p.o 00:19:22.193 CC lib/nvmf/nvmf.o 00:19:22.193 CC lib/nvmf/nvmf_rpc.o 00:19:22.193 LIB libspdk_nbd.a 00:19:22.193 SO libspdk_nbd.so.7.0 00:19:22.193 LIB libspdk_ublk.a 00:19:22.193 SO libspdk_ublk.so.3.0 00:19:22.452 SYMLINK libspdk_nbd.so 00:19:22.452 CC lib/ftl/ftl_l2p_flat.o 00:19:22.452 CC lib/scsi/scsi_pr.o 00:19:22.452 SYMLINK libspdk_ublk.so 00:19:22.452 CC lib/ftl/ftl_nv_cache.o 00:19:22.452 CC lib/ftl/ftl_band.o 00:19:22.452 CC lib/scsi/scsi_rpc.o 00:19:22.710 CC lib/scsi/task.o 00:19:22.710 CC lib/nvmf/transport.o 00:19:22.710 CC lib/nvmf/tcp.o 00:19:22.710 CC lib/nvmf/stubs.o 00:19:22.968 LIB libspdk_scsi.a 00:19:22.968 CC lib/nvmf/mdns_server.o 00:19:22.968 SO libspdk_scsi.so.9.0 00:19:22.968 SYMLINK libspdk_scsi.so 00:19:22.968 CC lib/nvmf/rdma.o 00:19:23.227 CC lib/nvmf/auth.o 00:19:23.486 CC lib/ftl/ftl_band_ops.o 00:19:23.486 CC lib/iscsi/conn.o 00:19:23.486 CC lib/vhost/vhost.o 00:19:23.486 CC lib/vhost/vhost_rpc.o 00:19:23.486 CC lib/iscsi/init_grp.o 00:19:23.744 CC lib/iscsi/iscsi.o 00:19:23.744 CC lib/ftl/ftl_writer.o 00:19:24.003 CC lib/ftl/ftl_rq.o 00:19:24.003 CC lib/iscsi/param.o 00:19:24.003 CC lib/ftl/ftl_reloc.o 00:19:24.263 CC lib/iscsi/portal_grp.o 00:19:24.263 CC lib/ftl/ftl_l2p_cache.o 00:19:24.263 CC lib/iscsi/tgt_node.o 00:19:24.263 CC lib/vhost/vhost_scsi.o 00:19:24.523 CC lib/ftl/ftl_p2l.o 00:19:24.523 CC lib/ftl/ftl_p2l_log.o 00:19:24.783 CC lib/ftl/mngt/ftl_mngt.o 00:19:24.783 CC lib/vhost/vhost_blk.o 00:19:24.783 CC lib/vhost/rte_vhost_user.o 00:19:24.783 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:19:25.041 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:19:25.041 CC lib/iscsi/iscsi_subsystem.o 00:19:25.041 CC lib/ftl/mngt/ftl_mngt_startup.o 00:19:25.041 CC lib/iscsi/iscsi_rpc.o 00:19:25.041 CC lib/ftl/mngt/ftl_mngt_md.o 00:19:25.041 CC lib/iscsi/task.o 00:19:25.299 CC lib/ftl/mngt/ftl_mngt_misc.o 00:19:25.299 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:19:25.557 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:19:25.557 CC lib/ftl/mngt/ftl_mngt_band.o 00:19:25.557 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:19:25.557 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:19:25.557 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:19:25.557 LIB libspdk_iscsi.a 00:19:25.557 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:19:25.557 CC lib/ftl/utils/ftl_conf.o 00:19:25.557 SO libspdk_iscsi.so.8.0 00:19:25.816 CC lib/ftl/utils/ftl_md.o 00:19:25.816 CC lib/ftl/utils/ftl_mempool.o 00:19:25.816 CC lib/ftl/utils/ftl_bitmap.o 00:19:25.816 CC lib/ftl/utils/ftl_property.o 00:19:25.816 SYMLINK libspdk_iscsi.so 00:19:25.816 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:19:25.816 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:19:25.816 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:19:25.816 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:19:26.076 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:19:26.076 LIB libspdk_vhost.a 00:19:26.076 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:19:26.076 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:19:26.076 SO libspdk_vhost.so.8.0 00:19:26.076 LIB libspdk_nvmf.a 00:19:26.076 CC lib/ftl/upgrade/ftl_sb_v3.o 00:19:26.076 CC lib/ftl/upgrade/ftl_sb_v5.o 00:19:26.076 CC lib/ftl/nvc/ftl_nvc_dev.o 00:19:26.076 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:19:26.336 SYMLINK libspdk_vhost.so 00:19:26.336 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 
00:19:26.336 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:19:26.336 CC lib/ftl/base/ftl_base_dev.o 00:19:26.336 CC lib/ftl/base/ftl_base_bdev.o 00:19:26.336 SO libspdk_nvmf.so.20.0 00:19:26.336 CC lib/ftl/ftl_trace.o 00:19:26.595 SYMLINK libspdk_nvmf.so 00:19:26.595 LIB libspdk_ftl.a 00:19:26.854 SO libspdk_ftl.so.9.0 00:19:27.112 SYMLINK libspdk_ftl.so 00:19:27.695 CC module/env_dpdk/env_dpdk_rpc.o 00:19:27.695 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:19:27.695 CC module/scheduler/dynamic/scheduler_dynamic.o 00:19:27.695 CC module/scheduler/gscheduler/gscheduler.o 00:19:27.695 CC module/sock/posix/posix.o 00:19:27.695 CC module/accel/error/accel_error.o 00:19:27.695 CC module/accel/ioat/accel_ioat.o 00:19:27.695 CC module/keyring/file/keyring.o 00:19:27.695 CC module/fsdev/aio/fsdev_aio.o 00:19:27.695 CC module/blob/bdev/blob_bdev.o 00:19:27.695 LIB libspdk_env_dpdk_rpc.a 00:19:27.695 SO libspdk_env_dpdk_rpc.so.6.0 00:19:27.955 LIB libspdk_scheduler_gscheduler.a 00:19:27.955 SYMLINK libspdk_env_dpdk_rpc.so 00:19:27.955 CC module/keyring/file/keyring_rpc.o 00:19:27.955 CC module/fsdev/aio/fsdev_aio_rpc.o 00:19:27.955 LIB libspdk_scheduler_dpdk_governor.a 00:19:27.955 SO libspdk_scheduler_gscheduler.so.4.0 00:19:27.955 SO libspdk_scheduler_dpdk_governor.so.4.0 00:19:27.955 LIB libspdk_scheduler_dynamic.a 00:19:27.955 CC module/accel/ioat/accel_ioat_rpc.o 00:19:27.955 SO libspdk_scheduler_dynamic.so.4.0 00:19:27.955 SYMLINK libspdk_scheduler_gscheduler.so 00:19:27.955 CC module/fsdev/aio/linux_aio_mgr.o 00:19:27.955 CC module/accel/error/accel_error_rpc.o 00:19:27.955 SYMLINK libspdk_scheduler_dpdk_governor.so 00:19:27.955 SYMLINK libspdk_scheduler_dynamic.so 00:19:27.955 LIB libspdk_keyring_file.a 00:19:27.955 LIB libspdk_blob_bdev.a 00:19:27.955 SO libspdk_keyring_file.so.2.0 00:19:27.955 SO libspdk_blob_bdev.so.11.0 00:19:27.955 LIB libspdk_accel_ioat.a 00:19:27.955 SO libspdk_accel_ioat.so.6.0 00:19:28.213 SYMLINK libspdk_blob_bdev.so 00:19:28.213 LIB libspdk_accel_error.a 00:19:28.213 SYMLINK libspdk_keyring_file.so 00:19:28.213 SO libspdk_accel_error.so.2.0 00:19:28.213 SYMLINK libspdk_accel_ioat.so 00:19:28.213 CC module/accel/iaa/accel_iaa.o 00:19:28.213 CC module/accel/iaa/accel_iaa_rpc.o 00:19:28.213 CC module/accel/dsa/accel_dsa.o 00:19:28.213 CC module/accel/dsa/accel_dsa_rpc.o 00:19:28.213 SYMLINK libspdk_accel_error.so 00:19:28.213 CC module/keyring/linux/keyring.o 00:19:28.213 CC module/keyring/linux/keyring_rpc.o 00:19:28.472 LIB libspdk_keyring_linux.a 00:19:28.472 LIB libspdk_accel_iaa.a 00:19:28.472 CC module/bdev/delay/vbdev_delay.o 00:19:28.472 SO libspdk_keyring_linux.so.1.0 00:19:28.472 CC module/blobfs/bdev/blobfs_bdev.o 00:19:28.472 SO libspdk_accel_iaa.so.3.0 00:19:28.472 CC module/bdev/error/vbdev_error.o 00:19:28.472 SYMLINK libspdk_keyring_linux.so 00:19:28.472 CC module/bdev/delay/vbdev_delay_rpc.o 00:19:28.472 CC module/bdev/gpt/gpt.o 00:19:28.472 LIB libspdk_accel_dsa.a 00:19:28.472 SYMLINK libspdk_accel_iaa.so 00:19:28.472 CC module/bdev/lvol/vbdev_lvol.o 00:19:28.472 CC module/bdev/gpt/vbdev_gpt.o 00:19:28.472 LIB libspdk_fsdev_aio.a 00:19:28.472 SO libspdk_accel_dsa.so.5.0 00:19:28.472 SO libspdk_fsdev_aio.so.1.0 00:19:28.731 SYMLINK libspdk_accel_dsa.so 00:19:28.731 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:19:28.731 LIB libspdk_sock_posix.a 00:19:28.731 SYMLINK libspdk_fsdev_aio.so 00:19:28.731 SO libspdk_sock_posix.so.6.0 00:19:28.731 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:19:28.731 SYMLINK libspdk_sock_posix.so 00:19:28.731 CC 
module/bdev/malloc/bdev_malloc.o 00:19:28.731 LIB libspdk_blobfs_bdev.a 00:19:28.731 CC module/bdev/error/vbdev_error_rpc.o 00:19:28.731 CC module/bdev/null/bdev_null.o 00:19:28.731 SO libspdk_blobfs_bdev.so.6.0 00:19:28.731 LIB libspdk_bdev_gpt.a 00:19:28.731 LIB libspdk_bdev_delay.a 00:19:28.989 SO libspdk_bdev_gpt.so.6.0 00:19:28.989 CC module/bdev/nvme/bdev_nvme.o 00:19:28.989 SO libspdk_bdev_delay.so.6.0 00:19:28.989 SYMLINK libspdk_blobfs_bdev.so 00:19:28.989 CC module/bdev/passthru/vbdev_passthru.o 00:19:28.989 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:19:28.989 SYMLINK libspdk_bdev_gpt.so 00:19:28.989 SYMLINK libspdk_bdev_delay.so 00:19:28.989 CC module/bdev/malloc/bdev_malloc_rpc.o 00:19:28.989 LIB libspdk_bdev_error.a 00:19:28.989 SO libspdk_bdev_error.so.6.0 00:19:28.989 SYMLINK libspdk_bdev_error.so 00:19:28.989 CC module/bdev/raid/bdev_raid.o 00:19:28.989 CC module/bdev/raid/bdev_raid_rpc.o 00:19:29.247 LIB libspdk_bdev_lvol.a 00:19:29.247 CC module/bdev/null/bdev_null_rpc.o 00:19:29.247 CC module/bdev/raid/bdev_raid_sb.o 00:19:29.247 SO libspdk_bdev_lvol.so.6.0 00:19:29.247 CC module/bdev/split/vbdev_split.o 00:19:29.247 SYMLINK libspdk_bdev_lvol.so 00:19:29.247 LIB libspdk_bdev_passthru.a 00:19:29.247 CC module/bdev/zone_block/vbdev_zone_block.o 00:19:29.247 LIB libspdk_bdev_malloc.a 00:19:29.247 SO libspdk_bdev_passthru.so.6.0 00:19:29.247 LIB libspdk_bdev_null.a 00:19:29.247 SO libspdk_bdev_malloc.so.6.0 00:19:29.247 SO libspdk_bdev_null.so.6.0 00:19:29.247 CC module/bdev/raid/raid0.o 00:19:29.505 SYMLINK libspdk_bdev_passthru.so 00:19:29.505 SYMLINK libspdk_bdev_malloc.so 00:19:29.505 CC module/bdev/nvme/bdev_nvme_rpc.o 00:19:29.505 CC module/bdev/raid/raid1.o 00:19:29.505 CC module/bdev/xnvme/bdev_xnvme.o 00:19:29.505 SYMLINK libspdk_bdev_null.so 00:19:29.505 CC module/bdev/raid/concat.o 00:19:29.505 CC module/bdev/nvme/nvme_rpc.o 00:19:29.505 CC module/bdev/split/vbdev_split_rpc.o 00:19:29.764 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:19:29.764 CC module/bdev/nvme/bdev_mdns_client.o 00:19:29.764 LIB libspdk_bdev_split.a 00:19:29.764 CC module/bdev/nvme/vbdev_opal.o 00:19:29.764 SO libspdk_bdev_split.so.6.0 00:19:29.764 CC module/bdev/nvme/vbdev_opal_rpc.o 00:19:29.764 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:19:29.764 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:19:29.764 SYMLINK libspdk_bdev_split.so 00:19:29.764 LIB libspdk_bdev_zone_block.a 00:19:30.023 SO libspdk_bdev_zone_block.so.6.0 00:19:30.023 LIB libspdk_bdev_xnvme.a 00:19:30.023 SYMLINK libspdk_bdev_zone_block.so 00:19:30.023 SO libspdk_bdev_xnvme.so.3.0 00:19:30.023 CC module/bdev/aio/bdev_aio.o 00:19:30.023 CC module/bdev/aio/bdev_aio_rpc.o 00:19:30.023 SYMLINK libspdk_bdev_xnvme.so 00:19:30.023 CC module/bdev/ftl/bdev_ftl.o 00:19:30.023 CC module/bdev/ftl/bdev_ftl_rpc.o 00:19:30.302 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:19:30.302 CC module/bdev/iscsi/bdev_iscsi.o 00:19:30.302 CC module/bdev/virtio/bdev_virtio_scsi.o 00:19:30.302 CC module/bdev/virtio/bdev_virtio_blk.o 00:19:30.302 CC module/bdev/virtio/bdev_virtio_rpc.o 00:19:30.565 LIB libspdk_bdev_raid.a 00:19:30.565 LIB libspdk_bdev_ftl.a 00:19:30.565 LIB libspdk_bdev_aio.a 00:19:30.565 SO libspdk_bdev_ftl.so.6.0 00:19:30.565 SO libspdk_bdev_raid.so.6.0 00:19:30.565 SO libspdk_bdev_aio.so.6.0 00:19:30.565 SYMLINK libspdk_bdev_ftl.so 00:19:30.565 SYMLINK libspdk_bdev_aio.so 00:19:30.566 SYMLINK libspdk_bdev_raid.so 00:19:30.566 LIB libspdk_bdev_iscsi.a 00:19:30.566 SO libspdk_bdev_iscsi.so.6.0 00:19:30.824 SYMLINK libspdk_bdev_iscsi.so 
00:19:30.824 LIB libspdk_bdev_virtio.a 00:19:30.824 SO libspdk_bdev_virtio.so.6.0 00:19:31.081 SYMLINK libspdk_bdev_virtio.so 00:19:32.992 LIB libspdk_bdev_nvme.a 00:19:32.992 SO libspdk_bdev_nvme.so.7.1 00:19:32.992 SYMLINK libspdk_bdev_nvme.so 00:19:33.634 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:19:33.634 CC module/event/subsystems/scheduler/scheduler.o 00:19:33.634 CC module/event/subsystems/fsdev/fsdev.o 00:19:33.634 CC module/event/subsystems/keyring/keyring.o 00:19:33.634 CC module/event/subsystems/vmd/vmd.o 00:19:33.634 CC module/event/subsystems/vmd/vmd_rpc.o 00:19:33.634 CC module/event/subsystems/iobuf/iobuf.o 00:19:33.634 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:19:33.634 CC module/event/subsystems/sock/sock.o 00:19:33.634 LIB libspdk_event_keyring.a 00:19:33.634 LIB libspdk_event_vhost_blk.a 00:19:33.634 LIB libspdk_event_fsdev.a 00:19:33.634 LIB libspdk_event_scheduler.a 00:19:33.634 LIB libspdk_event_vmd.a 00:19:33.634 SO libspdk_event_keyring.so.1.0 00:19:33.634 SO libspdk_event_scheduler.so.4.0 00:19:33.634 SO libspdk_event_vhost_blk.so.3.0 00:19:33.634 LIB libspdk_event_sock.a 00:19:33.634 SO libspdk_event_fsdev.so.1.0 00:19:33.634 SO libspdk_event_vmd.so.6.0 00:19:33.634 SO libspdk_event_sock.so.5.0 00:19:33.634 SYMLINK libspdk_event_keyring.so 00:19:33.634 LIB libspdk_event_iobuf.a 00:19:33.634 SYMLINK libspdk_event_scheduler.so 00:19:33.634 SYMLINK libspdk_event_vhost_blk.so 00:19:33.634 SYMLINK libspdk_event_fsdev.so 00:19:33.634 SYMLINK libspdk_event_vmd.so 00:19:33.634 SO libspdk_event_iobuf.so.3.0 00:19:33.634 SYMLINK libspdk_event_sock.so 00:19:33.893 SYMLINK libspdk_event_iobuf.so 00:19:34.152 CC module/event/subsystems/accel/accel.o 00:19:34.411 LIB libspdk_event_accel.a 00:19:34.411 SO libspdk_event_accel.so.6.0 00:19:34.411 SYMLINK libspdk_event_accel.so 00:19:34.979 CC module/event/subsystems/bdev/bdev.o 00:19:34.979 LIB libspdk_event_bdev.a 00:19:34.979 SO libspdk_event_bdev.so.6.0 00:19:35.238 SYMLINK libspdk_event_bdev.so 00:19:35.496 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:19:35.496 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:19:35.496 CC module/event/subsystems/scsi/scsi.o 00:19:35.496 CC module/event/subsystems/ublk/ublk.o 00:19:35.496 CC module/event/subsystems/nbd/nbd.o 00:19:35.755 LIB libspdk_event_nbd.a 00:19:35.755 LIB libspdk_event_ublk.a 00:19:35.755 SO libspdk_event_nbd.so.6.0 00:19:35.755 LIB libspdk_event_scsi.a 00:19:35.755 SO libspdk_event_ublk.so.3.0 00:19:35.755 LIB libspdk_event_nvmf.a 00:19:35.755 SO libspdk_event_scsi.so.6.0 00:19:35.755 SYMLINK libspdk_event_nbd.so 00:19:35.755 SYMLINK libspdk_event_ublk.so 00:19:35.755 SO libspdk_event_nvmf.so.6.0 00:19:35.755 SYMLINK libspdk_event_scsi.so 00:19:36.015 SYMLINK libspdk_event_nvmf.so 00:19:36.274 CC module/event/subsystems/iscsi/iscsi.o 00:19:36.274 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:19:36.274 LIB libspdk_event_vhost_scsi.a 00:19:36.274 LIB libspdk_event_iscsi.a 00:19:36.532 SO libspdk_event_vhost_scsi.so.3.0 00:19:36.532 SO libspdk_event_iscsi.so.6.0 00:19:36.532 SYMLINK libspdk_event_iscsi.so 00:19:36.532 SYMLINK libspdk_event_vhost_scsi.so 00:19:36.791 SO libspdk.so.6.0 00:19:36.791 SYMLINK libspdk.so 00:19:37.051 CC test/rpc_client/rpc_client_test.o 00:19:37.051 TEST_HEADER include/spdk/accel.h 00:19:37.051 TEST_HEADER include/spdk/accel_module.h 00:19:37.051 TEST_HEADER include/spdk/assert.h 00:19:37.051 TEST_HEADER include/spdk/barrier.h 00:19:37.051 TEST_HEADER include/spdk/base64.h 00:19:37.051 TEST_HEADER include/spdk/bdev.h 
00:19:37.051 TEST_HEADER include/spdk/bdev_module.h 00:19:37.051 CXX app/trace/trace.o 00:19:37.051 TEST_HEADER include/spdk/bdev_zone.h 00:19:37.051 TEST_HEADER include/spdk/bit_array.h 00:19:37.051 TEST_HEADER include/spdk/bit_pool.h 00:19:37.051 TEST_HEADER include/spdk/blob_bdev.h 00:19:37.051 CC app/trace_record/trace_record.o 00:19:37.051 TEST_HEADER include/spdk/blobfs_bdev.h 00:19:37.051 TEST_HEADER include/spdk/blobfs.h 00:19:37.051 TEST_HEADER include/spdk/blob.h 00:19:37.051 TEST_HEADER include/spdk/conf.h 00:19:37.051 TEST_HEADER include/spdk/config.h 00:19:37.051 TEST_HEADER include/spdk/cpuset.h 00:19:37.051 TEST_HEADER include/spdk/crc16.h 00:19:37.051 TEST_HEADER include/spdk/crc32.h 00:19:37.051 TEST_HEADER include/spdk/crc64.h 00:19:37.051 TEST_HEADER include/spdk/dif.h 00:19:37.051 TEST_HEADER include/spdk/dma.h 00:19:37.051 TEST_HEADER include/spdk/endian.h 00:19:37.051 TEST_HEADER include/spdk/env_dpdk.h 00:19:37.051 TEST_HEADER include/spdk/env.h 00:19:37.051 TEST_HEADER include/spdk/event.h 00:19:37.051 TEST_HEADER include/spdk/fd_group.h 00:19:37.051 TEST_HEADER include/spdk/fd.h 00:19:37.051 TEST_HEADER include/spdk/file.h 00:19:37.051 TEST_HEADER include/spdk/fsdev.h 00:19:37.051 TEST_HEADER include/spdk/fsdev_module.h 00:19:37.051 TEST_HEADER include/spdk/ftl.h 00:19:37.051 TEST_HEADER include/spdk/fuse_dispatcher.h 00:19:37.051 TEST_HEADER include/spdk/gpt_spec.h 00:19:37.051 TEST_HEADER include/spdk/hexlify.h 00:19:37.051 TEST_HEADER include/spdk/histogram_data.h 00:19:37.051 TEST_HEADER include/spdk/idxd.h 00:19:37.051 TEST_HEADER include/spdk/idxd_spec.h 00:19:37.051 TEST_HEADER include/spdk/init.h 00:19:37.051 CC app/nvmf_tgt/nvmf_main.o 00:19:37.051 TEST_HEADER include/spdk/ioat.h 00:19:37.051 TEST_HEADER include/spdk/ioat_spec.h 00:19:37.051 TEST_HEADER include/spdk/iscsi_spec.h 00:19:37.051 TEST_HEADER include/spdk/json.h 00:19:37.051 TEST_HEADER include/spdk/jsonrpc.h 00:19:37.051 TEST_HEADER include/spdk/keyring.h 00:19:37.051 TEST_HEADER include/spdk/keyring_module.h 00:19:37.051 TEST_HEADER include/spdk/likely.h 00:19:37.051 CC test/thread/poller_perf/poller_perf.o 00:19:37.051 TEST_HEADER include/spdk/log.h 00:19:37.051 CC examples/util/zipf/zipf.o 00:19:37.051 TEST_HEADER include/spdk/lvol.h 00:19:37.051 TEST_HEADER include/spdk/md5.h 00:19:37.051 TEST_HEADER include/spdk/memory.h 00:19:37.051 TEST_HEADER include/spdk/mmio.h 00:19:37.051 TEST_HEADER include/spdk/nbd.h 00:19:37.051 TEST_HEADER include/spdk/net.h 00:19:37.051 TEST_HEADER include/spdk/notify.h 00:19:37.051 TEST_HEADER include/spdk/nvme.h 00:19:37.051 TEST_HEADER include/spdk/nvme_intel.h 00:19:37.051 TEST_HEADER include/spdk/nvme_ocssd.h 00:19:37.051 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:19:37.051 CC test/dma/test_dma/test_dma.o 00:19:37.051 TEST_HEADER include/spdk/nvme_spec.h 00:19:37.051 TEST_HEADER include/spdk/nvme_zns.h 00:19:37.051 TEST_HEADER include/spdk/nvmf_cmd.h 00:19:37.051 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:19:37.051 CC test/app/bdev_svc/bdev_svc.o 00:19:37.051 TEST_HEADER include/spdk/nvmf.h 00:19:37.051 TEST_HEADER include/spdk/nvmf_spec.h 00:19:37.051 TEST_HEADER include/spdk/nvmf_transport.h 00:19:37.051 TEST_HEADER include/spdk/opal.h 00:19:37.051 TEST_HEADER include/spdk/opal_spec.h 00:19:37.051 TEST_HEADER include/spdk/pci_ids.h 00:19:37.051 TEST_HEADER include/spdk/pipe.h 00:19:37.051 TEST_HEADER include/spdk/queue.h 00:19:37.051 TEST_HEADER include/spdk/reduce.h 00:19:37.051 TEST_HEADER include/spdk/rpc.h 00:19:37.052 TEST_HEADER 
include/spdk/scheduler.h 00:19:37.052 TEST_HEADER include/spdk/scsi.h 00:19:37.052 TEST_HEADER include/spdk/scsi_spec.h 00:19:37.052 TEST_HEADER include/spdk/sock.h 00:19:37.052 CC test/env/mem_callbacks/mem_callbacks.o 00:19:37.052 TEST_HEADER include/spdk/stdinc.h 00:19:37.052 TEST_HEADER include/spdk/string.h 00:19:37.052 TEST_HEADER include/spdk/thread.h 00:19:37.052 TEST_HEADER include/spdk/trace.h 00:19:37.052 TEST_HEADER include/spdk/trace_parser.h 00:19:37.052 TEST_HEADER include/spdk/tree.h 00:19:37.052 TEST_HEADER include/spdk/ublk.h 00:19:37.052 TEST_HEADER include/spdk/util.h 00:19:37.052 TEST_HEADER include/spdk/uuid.h 00:19:37.052 TEST_HEADER include/spdk/version.h 00:19:37.052 TEST_HEADER include/spdk/vfio_user_pci.h 00:19:37.310 TEST_HEADER include/spdk/vfio_user_spec.h 00:19:37.310 TEST_HEADER include/spdk/vhost.h 00:19:37.310 TEST_HEADER include/spdk/vmd.h 00:19:37.310 TEST_HEADER include/spdk/xor.h 00:19:37.310 TEST_HEADER include/spdk/zipf.h 00:19:37.310 CXX test/cpp_headers/accel.o 00:19:37.310 LINK rpc_client_test 00:19:37.310 LINK nvmf_tgt 00:19:37.310 LINK zipf 00:19:37.310 LINK poller_perf 00:19:37.310 LINK spdk_trace_record 00:19:37.310 LINK bdev_svc 00:19:37.568 LINK spdk_trace 00:19:37.568 CXX test/cpp_headers/accel_module.o 00:19:37.568 CC test/app/histogram_perf/histogram_perf.o 00:19:37.568 CC test/app/jsoncat/jsoncat.o 00:19:37.568 CC test/app/stub/stub.o 00:19:37.568 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:19:37.568 CXX test/cpp_headers/assert.o 00:19:37.568 CC examples/ioat/perf/perf.o 00:19:37.568 LINK test_dma 00:19:37.828 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:19:37.828 CC app/iscsi_tgt/iscsi_tgt.o 00:19:37.828 LINK jsoncat 00:19:37.828 LINK histogram_perf 00:19:37.828 LINK mem_callbacks 00:19:37.828 CXX test/cpp_headers/barrier.o 00:19:37.828 LINK stub 00:19:37.828 CXX test/cpp_headers/base64.o 00:19:38.087 LINK ioat_perf 00:19:38.087 LINK iscsi_tgt 00:19:38.087 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:19:38.087 CC test/env/vtophys/vtophys.o 00:19:38.087 CXX test/cpp_headers/bdev.o 00:19:38.087 LINK nvme_fuzz 00:19:38.087 CC test/event/event_perf/event_perf.o 00:19:38.088 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:19:38.088 CC test/nvme/aer/aer.o 00:19:38.088 LINK vtophys 00:19:38.088 CC examples/ioat/verify/verify.o 00:19:38.346 CC test/accel/dif/dif.o 00:19:38.346 CXX test/cpp_headers/bdev_module.o 00:19:38.346 LINK event_perf 00:19:38.346 CC app/spdk_tgt/spdk_tgt.o 00:19:38.346 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:19:38.346 CC app/spdk_lspci/spdk_lspci.o 00:19:38.346 LINK verify 00:19:38.604 CXX test/cpp_headers/bdev_zone.o 00:19:38.604 LINK aer 00:19:38.604 CC test/event/reactor/reactor.o 00:19:38.604 LINK spdk_lspci 00:19:38.604 LINK env_dpdk_post_init 00:19:38.604 LINK spdk_tgt 00:19:38.604 LINK vhost_fuzz 00:19:38.604 CXX test/cpp_headers/bit_array.o 00:19:38.862 LINK reactor 00:19:38.862 CC examples/vmd/lsvmd/lsvmd.o 00:19:38.862 CC test/nvme/reset/reset.o 00:19:38.862 CC test/nvme/sgl/sgl.o 00:19:38.862 CC test/env/memory/memory_ut.o 00:19:38.862 CXX test/cpp_headers/bit_pool.o 00:19:38.862 LINK lsvmd 00:19:38.862 CC test/nvme/e2edp/nvme_dp.o 00:19:38.862 CC app/spdk_nvme_perf/perf.o 00:19:39.121 CC test/event/reactor_perf/reactor_perf.o 00:19:39.121 CXX test/cpp_headers/blob_bdev.o 00:19:39.121 LINK reset 00:19:39.121 LINK dif 00:19:39.121 LINK sgl 00:19:39.380 CC examples/vmd/led/led.o 00:19:39.380 LINK reactor_perf 00:19:39.380 LINK nvme_dp 00:19:39.380 CXX test/cpp_headers/blobfs_bdev.o 00:19:39.380 CXX 
test/cpp_headers/blobfs.o 00:19:39.380 CC test/nvme/overhead/overhead.o 00:19:39.380 LINK led 00:19:39.380 CXX test/cpp_headers/blob.o 00:19:39.380 CXX test/cpp_headers/conf.o 00:19:39.704 CC test/event/app_repeat/app_repeat.o 00:19:39.704 CC app/spdk_nvme_identify/identify.o 00:19:39.704 CXX test/cpp_headers/config.o 00:19:39.704 CXX test/cpp_headers/cpuset.o 00:19:39.704 CC app/spdk_nvme_discover/discovery_aer.o 00:19:39.704 LINK app_repeat 00:19:39.704 CC examples/interrupt_tgt/interrupt_tgt.o 00:19:39.704 LINK overhead 00:19:39.704 CC examples/idxd/perf/perf.o 00:19:39.704 CXX test/cpp_headers/crc16.o 00:19:39.964 LINK iscsi_fuzz 00:19:39.964 LINK spdk_nvme_discover 00:19:39.964 LINK interrupt_tgt 00:19:39.964 CXX test/cpp_headers/crc32.o 00:19:39.964 LINK spdk_nvme_perf 00:19:39.964 CC test/nvme/err_injection/err_injection.o 00:19:39.964 CC test/event/scheduler/scheduler.o 00:19:40.222 CXX test/cpp_headers/crc64.o 00:19:40.222 LINK idxd_perf 00:19:40.222 LINK memory_ut 00:19:40.222 LINK err_injection 00:19:40.222 CC examples/sock/hello_world/hello_sock.o 00:19:40.222 CC examples/thread/thread/thread_ex.o 00:19:40.222 LINK scheduler 00:19:40.222 CXX test/cpp_headers/dif.o 00:19:40.480 CC test/blobfs/mkfs/mkfs.o 00:19:40.480 CC test/lvol/esnap/esnap.o 00:19:40.480 CXX test/cpp_headers/dma.o 00:19:40.480 CXX test/cpp_headers/endian.o 00:19:40.480 CC test/bdev/bdevio/bdevio.o 00:19:40.480 CC test/env/pci/pci_ut.o 00:19:40.480 CC test/nvme/startup/startup.o 00:19:40.480 LINK thread 00:19:40.480 LINK mkfs 00:19:40.480 LINK hello_sock 00:19:40.739 LINK spdk_nvme_identify 00:19:40.739 CXX test/cpp_headers/env_dpdk.o 00:19:40.739 LINK startup 00:19:40.739 CXX test/cpp_headers/env.o 00:19:40.739 CC app/spdk_top/spdk_top.o 00:19:40.998 CC app/vhost/vhost.o 00:19:40.998 CXX test/cpp_headers/event.o 00:19:40.998 CC examples/nvme/reconnect/reconnect.o 00:19:40.998 CC examples/nvme/hello_world/hello_world.o 00:19:40.998 LINK bdevio 00:19:40.998 CC app/spdk_dd/spdk_dd.o 00:19:40.998 LINK pci_ut 00:19:40.998 CC test/nvme/reserve/reserve.o 00:19:41.258 CXX test/cpp_headers/fd_group.o 00:19:41.258 LINK vhost 00:19:41.258 CXX test/cpp_headers/fd.o 00:19:41.258 LINK hello_world 00:19:41.258 LINK reserve 00:19:41.258 CC examples/nvme/nvme_manage/nvme_manage.o 00:19:41.516 CXX test/cpp_headers/file.o 00:19:41.516 LINK reconnect 00:19:41.516 LINK spdk_dd 00:19:41.516 CC examples/nvme/arbitration/arbitration.o 00:19:41.516 CXX test/cpp_headers/fsdev.o 00:19:41.516 CC app/fio/nvme/fio_plugin.o 00:19:41.516 CC test/nvme/simple_copy/simple_copy.o 00:19:41.516 CC examples/accel/perf/accel_perf.o 00:19:41.775 CC test/nvme/connect_stress/connect_stress.o 00:19:41.775 CC test/nvme/boot_partition/boot_partition.o 00:19:41.775 CXX test/cpp_headers/fsdev_module.o 00:19:41.775 LINK arbitration 00:19:41.775 LINK spdk_top 00:19:41.775 LINK simple_copy 00:19:42.034 LINK connect_stress 00:19:42.034 LINK boot_partition 00:19:42.034 CXX test/cpp_headers/ftl.o 00:19:42.034 LINK nvme_manage 00:19:42.294 CC test/nvme/compliance/nvme_compliance.o 00:19:42.294 CC examples/nvme/hotplug/hotplug.o 00:19:42.294 CXX test/cpp_headers/fuse_dispatcher.o 00:19:42.294 CC test/nvme/fused_ordering/fused_ordering.o 00:19:42.294 LINK spdk_nvme 00:19:42.294 LINK accel_perf 00:19:42.294 CC examples/nvme/cmb_copy/cmb_copy.o 00:19:42.294 CC examples/blob/hello_world/hello_blob.o 00:19:42.294 CC app/fio/bdev/fio_plugin.o 00:19:42.294 CXX test/cpp_headers/gpt_spec.o 00:19:42.553 CC examples/nvme/abort/abort.o 00:19:42.553 LINK hotplug 00:19:42.553 
LINK fused_ordering 00:19:42.553 LINK cmb_copy 00:19:42.553 CXX test/cpp_headers/hexlify.o 00:19:42.553 LINK hello_blob 00:19:42.553 LINK nvme_compliance 00:19:42.812 CC examples/fsdev/hello_world/hello_fsdev.o 00:19:42.812 CXX test/cpp_headers/histogram_data.o 00:19:42.812 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:19:42.812 CXX test/cpp_headers/idxd.o 00:19:42.812 CC examples/blob/cli/blobcli.o 00:19:42.812 CC test/nvme/doorbell_aers/doorbell_aers.o 00:19:42.812 LINK abort 00:19:43.071 LINK spdk_bdev 00:19:43.071 CC examples/bdev/hello_world/hello_bdev.o 00:19:43.071 CXX test/cpp_headers/idxd_spec.o 00:19:43.071 LINK pmr_persistence 00:19:43.071 LINK hello_fsdev 00:19:43.071 CXX test/cpp_headers/init.o 00:19:43.071 CXX test/cpp_headers/ioat.o 00:19:43.071 CC examples/bdev/bdevperf/bdevperf.o 00:19:43.071 LINK doorbell_aers 00:19:43.330 LINK hello_bdev 00:19:43.330 CC test/nvme/fdp/fdp.o 00:19:43.330 CXX test/cpp_headers/ioat_spec.o 00:19:43.330 CXX test/cpp_headers/iscsi_spec.o 00:19:43.330 CXX test/cpp_headers/json.o 00:19:43.330 CXX test/cpp_headers/jsonrpc.o 00:19:43.330 CXX test/cpp_headers/keyring.o 00:19:43.588 CXX test/cpp_headers/keyring_module.o 00:19:43.588 CXX test/cpp_headers/likely.o 00:19:43.588 CXX test/cpp_headers/log.o 00:19:43.588 CXX test/cpp_headers/lvol.o 00:19:43.588 CXX test/cpp_headers/md5.o 00:19:43.588 LINK blobcli 00:19:43.588 CC test/nvme/cuse/cuse.o 00:19:43.588 CXX test/cpp_headers/memory.o 00:19:43.588 CXX test/cpp_headers/mmio.o 00:19:43.588 CXX test/cpp_headers/nbd.o 00:19:43.588 CXX test/cpp_headers/net.o 00:19:43.588 CXX test/cpp_headers/notify.o 00:19:43.588 LINK fdp 00:19:43.847 CXX test/cpp_headers/nvme.o 00:19:43.847 CXX test/cpp_headers/nvme_intel.o 00:19:43.847 CXX test/cpp_headers/nvme_ocssd.o 00:19:43.847 CXX test/cpp_headers/nvme_ocssd_spec.o 00:19:43.847 CXX test/cpp_headers/nvme_spec.o 00:19:43.847 CXX test/cpp_headers/nvme_zns.o 00:19:43.847 CXX test/cpp_headers/nvmf_cmd.o 00:19:43.847 CXX test/cpp_headers/nvmf_fc_spec.o 00:19:43.847 CXX test/cpp_headers/nvmf.o 00:19:44.106 CXX test/cpp_headers/nvmf_spec.o 00:19:44.106 CXX test/cpp_headers/nvmf_transport.o 00:19:44.106 CXX test/cpp_headers/opal.o 00:19:44.106 CXX test/cpp_headers/opal_spec.o 00:19:44.106 CXX test/cpp_headers/pci_ids.o 00:19:44.106 CXX test/cpp_headers/pipe.o 00:19:44.106 CXX test/cpp_headers/queue.o 00:19:44.106 CXX test/cpp_headers/reduce.o 00:19:44.106 CXX test/cpp_headers/rpc.o 00:19:44.106 CXX test/cpp_headers/scheduler.o 00:19:44.106 CXX test/cpp_headers/scsi.o 00:19:44.106 CXX test/cpp_headers/scsi_spec.o 00:19:44.363 CXX test/cpp_headers/sock.o 00:19:44.363 LINK bdevperf 00:19:44.363 CXX test/cpp_headers/stdinc.o 00:19:44.363 CXX test/cpp_headers/string.o 00:19:44.363 CXX test/cpp_headers/thread.o 00:19:44.363 CXX test/cpp_headers/trace.o 00:19:44.363 CXX test/cpp_headers/trace_parser.o 00:19:44.363 CXX test/cpp_headers/tree.o 00:19:44.363 CXX test/cpp_headers/ublk.o 00:19:44.363 CXX test/cpp_headers/util.o 00:19:44.363 CXX test/cpp_headers/uuid.o 00:19:44.363 CXX test/cpp_headers/version.o 00:19:44.656 CXX test/cpp_headers/vfio_user_pci.o 00:19:44.656 CXX test/cpp_headers/vfio_user_spec.o 00:19:44.656 CXX test/cpp_headers/vhost.o 00:19:44.656 CXX test/cpp_headers/vmd.o 00:19:44.656 CXX test/cpp_headers/xor.o 00:19:44.656 CXX test/cpp_headers/zipf.o 00:19:44.656 CC examples/nvmf/nvmf/nvmf.o 00:19:45.223 LINK cuse 00:19:45.223 LINK nvmf 00:19:47.125 LINK esnap 00:19:47.692 00:19:47.692 real 1m34.769s 00:19:47.692 user 8m39.378s 00:19:47.692 sys 1m46.115s 
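The long run of CXX test/cpp_headers/*.o objects above is SPDK's header-hygiene pass: every public header under include/spdk is compiled as its own translation unit, so a header that cannot stand alone (a missing include, a C++-unfriendly declaration) fails the build immediately instead of surprising a consumer later. A minimal shell sketch of the same idea, assuming g++ and an SPDK checkout at ./spdk — the real build generates per-header source files through its makefiles, and some headers may need extra include paths (DPDK's, for example), so the flags here are illustrative only:

    # compile each public SPDK header as a stand-alone C++ translation unit
    for hdr in ./spdk/include/spdk/*.h; do
        name=$(basename "$hdr")
        printf '#include <spdk/%s>\n' "$name" |
            g++ -x c++ -fsyntax-only -I ./spdk/include - ||
            echo "does not compile stand-alone: $name"
    done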
00:19:47.692 13:39:55 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:19:47.692 13:39:55 make -- common/autotest_common.sh@10 -- $ set +x 00:19:47.692 ************************************ 00:19:47.692 END TEST make 00:19:47.692 ************************************ 00:19:47.692 13:39:55 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:19:47.692 13:39:55 -- pm/common@29 -- $ signal_monitor_resources TERM 00:19:47.692 13:39:55 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:19:47.692 13:39:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:47.692 13:39:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:19:47.692 13:39:55 -- pm/common@44 -- $ pid=5508 00:19:47.692 13:39:55 -- pm/common@50 -- $ kill -TERM 5508 00:19:47.692 13:39:55 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:47.692 13:39:55 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:19:47.692 13:39:55 -- pm/common@44 -- $ pid=5510 00:19:47.692 13:39:55 -- pm/common@50 -- $ kill -TERM 5510 00:19:47.692 13:39:55 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:19:47.692 13:39:55 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:19:47.692 13:39:55 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:47.692 13:39:55 -- common/autotest_common.sh@1693 -- # lcov --version 00:19:47.692 13:39:55 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:47.692 13:39:55 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:47.692 13:39:55 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:47.692 13:39:55 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:47.692 13:39:55 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:47.693 13:39:55 -- scripts/common.sh@336 -- # IFS=.-: 00:19:47.693 13:39:55 -- scripts/common.sh@336 -- # read -ra ver1 00:19:47.693 13:39:55 -- scripts/common.sh@337 -- # IFS=.-: 00:19:47.693 13:39:55 -- scripts/common.sh@337 -- # read -ra ver2 00:19:47.693 13:39:55 -- scripts/common.sh@338 -- # local 'op=<' 00:19:47.693 13:39:55 -- scripts/common.sh@340 -- # ver1_l=2 00:19:47.693 13:39:55 -- scripts/common.sh@341 -- # ver2_l=1 00:19:47.693 13:39:55 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:47.693 13:39:55 -- scripts/common.sh@344 -- # case "$op" in 00:19:47.693 13:39:55 -- scripts/common.sh@345 -- # : 1 00:19:47.693 13:39:55 -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:47.693 13:39:55 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:47.693 13:39:55 -- scripts/common.sh@365 -- # decimal 1 00:19:47.693 13:39:55 -- scripts/common.sh@353 -- # local d=1 00:19:47.693 13:39:55 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:47.693 13:39:55 -- scripts/common.sh@355 -- # echo 1 00:19:47.693 13:39:55 -- scripts/common.sh@365 -- # ver1[v]=1 00:19:47.693 13:39:55 -- scripts/common.sh@366 -- # decimal 2 00:19:47.952 13:39:55 -- scripts/common.sh@353 -- # local d=2 00:19:47.952 13:39:55 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:47.952 13:39:55 -- scripts/common.sh@355 -- # echo 2 00:19:47.952 13:39:55 -- scripts/common.sh@366 -- # ver2[v]=2 00:19:47.952 13:39:55 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:47.952 13:39:55 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:47.952 13:39:55 -- scripts/common.sh@368 -- # return 0 00:19:47.952 13:39:55 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:47.952 13:39:55 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:47.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.952 --rc genhtml_branch_coverage=1 00:19:47.952 --rc genhtml_function_coverage=1 00:19:47.952 --rc genhtml_legend=1 00:19:47.952 --rc geninfo_all_blocks=1 00:19:47.952 --rc geninfo_unexecuted_blocks=1 00:19:47.952 00:19:47.952 ' 00:19:47.952 13:39:55 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:47.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.952 --rc genhtml_branch_coverage=1 00:19:47.952 --rc genhtml_function_coverage=1 00:19:47.952 --rc genhtml_legend=1 00:19:47.952 --rc geninfo_all_blocks=1 00:19:47.952 --rc geninfo_unexecuted_blocks=1 00:19:47.952 00:19:47.952 ' 00:19:47.952 13:39:55 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:47.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.952 --rc genhtml_branch_coverage=1 00:19:47.952 --rc genhtml_function_coverage=1 00:19:47.952 --rc genhtml_legend=1 00:19:47.952 --rc geninfo_all_blocks=1 00:19:47.952 --rc geninfo_unexecuted_blocks=1 00:19:47.952 00:19:47.952 ' 00:19:47.952 13:39:55 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:47.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:47.952 --rc genhtml_branch_coverage=1 00:19:47.952 --rc genhtml_function_coverage=1 00:19:47.952 --rc genhtml_legend=1 00:19:47.952 --rc geninfo_all_blocks=1 00:19:47.952 --rc geninfo_unexecuted_blocks=1 00:19:47.952 00:19:47.952 ' 00:19:47.952 13:39:55 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:47.952 13:39:55 -- nvmf/common.sh@7 -- # uname -s 00:19:47.952 13:39:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:47.952 13:39:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:47.952 13:39:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:47.952 13:39:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:47.952 13:39:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:47.952 13:39:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:47.952 13:39:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:47.952 13:39:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:47.952 13:39:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:47.952 13:39:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:47.952 13:39:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79adfa99-5396-4778-86f4-6e24fc6ac5f1 00:19:47.952 
13:39:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=79adfa99-5396-4778-86f4-6e24fc6ac5f1 00:19:47.952 13:39:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:47.952 13:39:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:47.952 13:39:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:47.952 13:39:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:47.952 13:39:55 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:47.952 13:39:55 -- scripts/common.sh@15 -- # shopt -s extglob 00:19:47.952 13:39:55 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:47.952 13:39:55 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:47.952 13:39:55 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:47.952 13:39:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.952 13:39:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.952 13:39:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.952 13:39:55 -- paths/export.sh@5 -- # export PATH 00:19:47.952 13:39:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:47.952 13:39:55 -- nvmf/common.sh@51 -- # : 0 00:19:47.952 13:39:55 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:47.952 13:39:55 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:47.952 13:39:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:47.952 13:39:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:47.952 13:39:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:47.952 13:39:55 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:47.952 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:47.952 13:39:55 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:47.952 13:39:55 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:47.952 13:39:55 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:47.952 13:39:55 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:19:47.952 13:39:55 -- spdk/autotest.sh@32 -- # uname -s 00:19:47.952 13:39:55 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:19:47.952 13:39:55 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:19:47.952 13:39:55 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:19:47.952 13:39:55 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:19:47.952 13:39:55 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:19:47.952 13:39:55 -- spdk/autotest.sh@44 -- # modprobe nbd 00:19:47.952 13:39:55 -- spdk/autotest.sh@46 -- # type -P udevadm 00:19:47.952 13:39:55 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:19:47.952 13:39:55 -- spdk/autotest.sh@48 -- # udevadm_pid=55082 00:19:47.952 13:39:55 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:19:47.952 13:39:55 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:19:47.952 13:39:55 -- pm/common@17 -- # local monitor 00:19:47.952 13:39:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:19:47.952 13:39:55 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:19:47.952 13:39:55 -- pm/common@21 -- # date +%s 00:19:47.952 13:39:55 -- pm/common@21 -- # date +%s 00:19:47.952 13:39:55 -- pm/common@25 -- # sleep 1 00:19:47.952 13:39:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732109995 00:19:47.952 13:39:55 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732109995 00:19:47.952 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732109995_collect-cpu-load.pm.log 00:19:47.952 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732109995_collect-vmstat.pm.log 00:19:48.887 13:39:56 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:19:48.887 13:39:56 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:19:48.887 13:39:56 -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:48.887 13:39:56 -- common/autotest_common.sh@10 -- # set +x 00:19:48.887 13:39:56 -- spdk/autotest.sh@59 -- # create_test_list 00:19:48.887 13:39:56 -- common/autotest_common.sh@752 -- # xtrace_disable 00:19:48.887 13:39:56 -- common/autotest_common.sh@10 -- # set +x 00:19:49.146 13:39:56 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:19:49.146 13:39:56 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:19:49.146 13:39:56 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:19:49.146 13:39:56 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:19:49.146 13:39:56 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:19:49.146 13:39:56 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:19:49.146 13:39:56 -- common/autotest_common.sh@1457 -- # uname 00:19:49.146 13:39:56 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:19:49.146 13:39:56 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:19:49.146 13:39:56 -- common/autotest_common.sh@1477 -- # uname 00:19:49.146 13:39:56 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:19:49.146 13:39:56 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:19:49.146 13:39:56 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:19:49.146 lcov: LCOV version 1.15 00:19:49.146 13:39:56 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:20:04.031 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:20:04.031 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:20:22.193 13:40:28 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:20:22.193 13:40:28 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:22.193 13:40:28 -- common/autotest_common.sh@10 -- # set +x 00:20:22.193 13:40:28 -- spdk/autotest.sh@78 -- # rm -f 00:20:22.193 13:40:28 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:22.193 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:22.193 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:20:22.193 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:20:22.193 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:20:22.193 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:20:22.193 13:40:29 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:20:22.193 13:40:29 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:20:22.193 13:40:29 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:20:22.193 13:40:29 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:20:22.193 13:40:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:22.193 13:40:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:20:22.193 13:40:29 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:20:22.193 13:40:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:22.193 13:40:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:22.193 13:40:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:22.193 13:40:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:20:22.193 13:40:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:20:22.193 13:40:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:22.193 13:40:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:22.193 13:40:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:22.193 13:40:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:20:22.193 13:40:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:20:22.193 13:40:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:20:22.193 13:40:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:22.193 13:40:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:22.193 13:40:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:20:22.193 13:40:29 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:20:22.193 13:40:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:20:22.193 13:40:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:22.193 13:40:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:22.193 13:40:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:20:22.193 13:40:29 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:20:22.193 13:40:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:20:22.193 13:40:29 
-- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:22.193 13:40:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:22.193 13:40:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:20:22.193 13:40:29 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:20:22.193 13:40:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:20:22.193 13:40:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:22.193 13:40:29 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:20:22.193 13:40:29 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:20:22.193 13:40:29 -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:20:22.194 13:40:29 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:20:22.194 13:40:29 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:20:22.194 13:40:29 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:20:22.194 13:40:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:20:22.194 13:40:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:20:22.194 13:40:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:20:22.194 13:40:29 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:20:22.194 13:40:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:20:22.194 No valid GPT data, bailing 00:20:22.194 13:40:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:22.194 13:40:29 -- scripts/common.sh@394 -- # pt= 00:20:22.194 13:40:29 -- scripts/common.sh@395 -- # return 1 00:20:22.194 13:40:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:20:22.194 1+0 records in 00:20:22.194 1+0 records out 00:20:22.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179281 s, 58.5 MB/s 00:20:22.194 13:40:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:20:22.194 13:40:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:20:22.194 13:40:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:20:22.194 13:40:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:20:22.194 13:40:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:20:22.194 No valid GPT data, bailing 00:20:22.194 13:40:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:22.194 13:40:29 -- scripts/common.sh@394 -- # pt= 00:20:22.194 13:40:29 -- scripts/common.sh@395 -- # return 1 00:20:22.194 13:40:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:20:22.194 1+0 records in 00:20:22.194 1+0 records out 00:20:22.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00643173 s, 163 MB/s 00:20:22.194 13:40:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:20:22.194 13:40:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:20:22.194 13:40:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:20:22.194 13:40:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:20:22.194 13:40:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:20:22.194 No valid GPT data, bailing 00:20:22.194 13:40:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:20:22.194 13:40:29 -- scripts/common.sh@394 -- # pt= 00:20:22.194 13:40:29 -- scripts/common.sh@395 -- # return 1 00:20:22.194 13:40:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:20:22.194 1+0 
records in 00:20:22.194 1+0 records out 00:20:22.194 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0067879 s, 154 MB/s 00:20:22.194 13:40:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:20:22.194 13:40:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:20:22.194 13:40:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:20:22.194 13:40:29 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:20:22.194 13:40:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:20:22.194 No valid GPT data, bailing 00:20:22.194 13:40:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:20:22.453 13:40:29 -- scripts/common.sh@394 -- # pt= 00:20:22.453 13:40:29 -- scripts/common.sh@395 -- # return 1 00:20:22.453 13:40:29 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:20:22.453 1+0 records in 00:20:22.453 1+0 records out 00:20:22.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0048046 s, 218 MB/s 00:20:22.453 13:40:29 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:20:22.453 13:40:29 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:20:22.453 13:40:29 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:20:22.453 13:40:29 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:20:22.453 13:40:29 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:20:22.453 No valid GPT data, bailing 00:20:22.453 13:40:29 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:20:22.453 13:40:30 -- scripts/common.sh@394 -- # pt= 00:20:22.453 13:40:30 -- scripts/common.sh@395 -- # return 1 00:20:22.453 13:40:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:20:22.453 1+0 records in 00:20:22.453 1+0 records out 00:20:22.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00658996 s, 159 MB/s 00:20:22.453 13:40:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:20:22.453 13:40:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:20:22.453 13:40:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:20:22.453 13:40:30 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:20:22.453 13:40:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:20:22.453 No valid GPT data, bailing 00:20:22.453 13:40:30 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:20:22.453 13:40:30 -- scripts/common.sh@394 -- # pt= 00:20:22.453 13:40:30 -- scripts/common.sh@395 -- # return 1 00:20:22.453 13:40:30 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:20:22.453 1+0 records in 00:20:22.453 1+0 records out 00:20:22.453 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00654185 s, 160 MB/s 00:20:22.453 13:40:30 -- spdk/autotest.sh@105 -- # sync 00:20:22.712 13:40:30 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:20:22.712 13:40:30 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:20:22.712 13:40:30 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:20:25.308 13:40:32 -- spdk/autotest.sh@111 -- # uname -s 00:20:25.308 13:40:32 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:20:25.308 13:40:32 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:20:25.308 13:40:32 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:20:25.892 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:26.462 
Hugepages 00:20:26.462 node hugesize free / total 00:20:26.462 node0 1048576kB 0 / 0 00:20:26.462 node0 2048kB 0 / 0 00:20:26.462 00:20:26.462 Type BDF Vendor Device NUMA Driver Device Block devices 00:20:26.462 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:20:26.722 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:20:26.722 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:20:26.722 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:20:26.982 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:20:26.982 13:40:34 -- spdk/autotest.sh@117 -- # uname -s 00:20:26.982 13:40:34 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:20:26.982 13:40:34 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:20:26.982 13:40:34 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:27.549 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:28.143 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:28.143 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:28.143 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:28.402 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:28.402 13:40:36 -- common/autotest_common.sh@1517 -- # sleep 1 00:20:29.339 13:40:37 -- common/autotest_common.sh@1518 -- # bdfs=() 00:20:29.339 13:40:37 -- common/autotest_common.sh@1518 -- # local bdfs 00:20:29.339 13:40:37 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:20:29.339 13:40:37 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:20:29.339 13:40:37 -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:29.339 13:40:37 -- common/autotest_common.sh@1498 -- # local bdfs 00:20:29.339 13:40:37 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:29.339 13:40:37 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:29.339 13:40:37 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:29.598 13:40:37 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:20:29.598 13:40:37 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:29.598 13:40:37 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:30.165 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:30.425 Waiting for block devices as requested 00:20:30.425 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:30.425 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:30.425 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:20:30.685 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:20:36.025 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:20:36.025 13:40:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:20:36.025 13:40:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:20:36.026 13:40:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:20:36.026 13:40:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:20:36.026 13:40:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:20:36.026 13:40:43 -- common/autotest_common.sh@1488 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:20:36.026 13:40:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:20:36.026 13:40:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:20:36.026 13:40:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:20:36.026 13:40:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:20:36.026 13:40:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:20:36.026 13:40:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:20:36.026 13:40:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:20:36.026 13:40:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:20:36.026 13:40:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:20:36.026 13:40:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:20:36.026 13:40:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:20:36.026 13:40:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:20:36.026 13:40:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:20:36.026 13:40:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:20:36.026 13:40:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:20:36.026 13:40:43 -- common/autotest_common.sh@1543 -- # continue 00:20:36.026 13:40:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:20:36.026 13:40:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:20:36.026 13:40:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:20:36.026 13:40:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:20:36.026 13:40:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:20:36.026 13:40:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:20:36.026 13:40:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:20:36.026 13:40:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:20:36.026 13:40:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:20:36.026 13:40:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:20:36.026 13:40:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:20:36.026 13:40:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:20:36.026 13:40:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:20:36.026 13:40:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:20:36.026 13:40:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:20:36.026 13:40:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:20:36.026 13:40:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:20:36.026 13:40:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:20:36.026 13:40:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:20:36.026 13:40:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:20:36.026 13:40:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:20:36.026 13:40:43 -- common/autotest_common.sh@1543 -- # continue 00:20:36.026 13:40:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:20:36.026 13:40:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:20:36.026 13:40:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:20:36.026 13:40:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:20:36.026 13:40:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:20:36.026 13:40:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:20:36.026 13:40:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:20:36.026 13:40:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:20:36.026 13:40:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:20:36.026 13:40:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:20:36.026 13:40:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:20:36.026 13:40:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:20:36.026 13:40:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:20:36.026 13:40:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:20:36.026 13:40:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:20:36.026 13:40:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:20:36.026 13:40:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:20:36.026 13:40:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:20:36.026 13:40:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:20:36.026 13:40:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:20:36.026 13:40:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:20:36.026 13:40:43 -- common/autotest_common.sh@1543 -- # continue 00:20:36.026 13:40:43 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:20:36.026 13:40:43 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:20:36.026 13:40:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:20:36.026 13:40:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:20:36.026 13:40:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:20:36.026 13:40:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:20:36.026 13:40:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:20:36.026 13:40:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:20:36.026 13:40:43 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:20:36.026 13:40:43 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:20:36.026 13:40:43 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:20:36.026 13:40:43 -- common/autotest_common.sh@1531 -- # grep oacs 00:20:36.026 13:40:43 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:20:36.026 13:40:43 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:20:36.026 13:40:43 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:20:36.026 13:40:43 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:20:36.026 13:40:43 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:20:36.026 13:40:43 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:20:36.026 13:40:43 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:20:36.026 13:40:43 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:20:36.026 13:40:43 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 
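The sysfs walk above is how the pre-cleanup step maps a PCI address back to its controller node before deciding whether a namespace revert is needed: readlink resolves each /sys/class/nvme/nvmeX symlink, grep keeps the one under the wanted BDF, and nvme-cli then reads two identify fields — OACS, whose bit 3 (0x8) advertises Namespace Management support, and UNVMCAP, where 0 means no unallocated capacity is left to reclaim, so the loop simply continues. A condensed sketch of the same probe, assuming nvme-cli is installed and $bdf holds an address like 0000:00:10.0:

    # resolve the /dev node for a PCI BDF, then check OACS and UNVMCAP
    ctrl=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")")
    oacs=$(nvme id-ctrl "/dev/$ctrl" | grep oacs | cut -d: -f2)
    if (( (oacs & 0x8) != 0 )); then        # bit 3: Namespace Management supported
        unvmcap=$(nvme id-ctrl "/dev/$ctrl" | grep unvmcap | cut -d: -f2)
        (( unvmcap == 0 )) && echo "$ctrl: no unallocated capacity, nothing to revert"
    fi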
00:20:36.026 13:40:43 -- common/autotest_common.sh@1543 -- # continue 00:20:36.026 13:40:43 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:20:36.026 13:40:43 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:36.026 13:40:43 -- common/autotest_common.sh@10 -- # set +x 00:20:36.026 13:40:43 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:20:36.026 13:40:43 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:36.026 13:40:43 -- common/autotest_common.sh@10 -- # set +x 00:20:36.026 13:40:43 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:36.595 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:37.531 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:37.531 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:20:37.531 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:37.531 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:20:37.531 13:40:45 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:20:37.531 13:40:45 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:37.531 13:40:45 -- common/autotest_common.sh@10 -- # set +x 00:20:37.531 13:40:45 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:20:37.531 13:40:45 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:20:37.531 13:40:45 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:20:37.531 13:40:45 -- common/autotest_common.sh@1563 -- # bdfs=() 00:20:37.531 13:40:45 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:20:37.531 13:40:45 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:20:37.531 13:40:45 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:20:37.531 13:40:45 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:20:37.531 13:40:45 -- common/autotest_common.sh@1498 -- # bdfs=() 00:20:37.531 13:40:45 -- common/autotest_common.sh@1498 -- # local bdfs 00:20:37.531 13:40:45 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:20:37.531 13:40:45 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:37.531 13:40:45 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:20:37.791 13:40:45 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:20:37.791 13:40:45 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:20:37.791 13:40:45 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:20:37.791 13:40:45 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:20:37.791 13:40:45 -- common/autotest_common.sh@1566 -- # device=0x0010 00:20:37.791 13:40:45 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:20:37.791 13:40:45 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:20:37.791 13:40:45 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:20:37.791 13:40:45 -- common/autotest_common.sh@1566 -- # device=0x0010 00:20:37.791 13:40:45 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:20:37.791 13:40:45 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:20:37.791 13:40:45 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:20:37.791 13:40:45 -- common/autotest_common.sh@1566 -- # device=0x0010 00:20:37.791 13:40:45 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:20:37.791 13:40:45 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:20:37.791 13:40:45 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:20:37.791 13:40:45 -- common/autotest_common.sh@1566 -- # device=0x0010 00:20:37.791 13:40:45 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:20:37.791 13:40:45 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:20:37.791 13:40:45 -- common/autotest_common.sh@1572 -- # return 0 00:20:37.791 13:40:45 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:20:37.791 13:40:45 -- common/autotest_common.sh@1580 -- # return 0 00:20:37.791 13:40:45 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:20:37.791 13:40:45 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:20:37.791 13:40:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:20:37.791 13:40:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:20:37.791 13:40:45 -- spdk/autotest.sh@149 -- # timing_enter lib 00:20:37.791 13:40:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:37.791 13:40:45 -- common/autotest_common.sh@10 -- # set +x 00:20:37.791 13:40:45 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:20:37.791 13:40:45 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:20:37.791 13:40:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:37.791 13:40:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:37.791 13:40:45 -- common/autotest_common.sh@10 -- # set +x 00:20:37.791 ************************************ 00:20:37.791 START TEST env 00:20:37.791 ************************************ 00:20:37.791 13:40:45 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:20:37.791 * Looking for test storage... 00:20:37.791 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:20:37.791 13:40:45 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:37.791 13:40:45 env -- common/autotest_common.sh@1693 -- # lcov --version 00:20:37.791 13:40:45 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:38.050 13:40:45 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:38.050 13:40:45 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:38.050 13:40:45 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:38.050 13:40:45 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:38.050 13:40:45 env -- scripts/common.sh@336 -- # IFS=.-: 00:20:38.050 13:40:45 env -- scripts/common.sh@336 -- # read -ra ver1 00:20:38.050 13:40:45 env -- scripts/common.sh@337 -- # IFS=.-: 00:20:38.050 13:40:45 env -- scripts/common.sh@337 -- # read -ra ver2 00:20:38.050 13:40:45 env -- scripts/common.sh@338 -- # local 'op=<' 00:20:38.050 13:40:45 env -- scripts/common.sh@340 -- # ver1_l=2 00:20:38.050 13:40:45 env -- scripts/common.sh@341 -- # ver2_l=1 00:20:38.050 13:40:45 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:38.050 13:40:45 env -- scripts/common.sh@344 -- # case "$op" in 00:20:38.050 13:40:45 env -- scripts/common.sh@345 -- # : 1 00:20:38.050 13:40:45 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:38.050 13:40:45 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:38.050 13:40:45 env -- scripts/common.sh@365 -- # decimal 1 00:20:38.050 13:40:45 env -- scripts/common.sh@353 -- # local d=1 00:20:38.050 13:40:45 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:38.050 13:40:45 env -- scripts/common.sh@355 -- # echo 1 00:20:38.050 13:40:45 env -- scripts/common.sh@365 -- # ver1[v]=1 00:20:38.050 13:40:45 env -- scripts/common.sh@366 -- # decimal 2 00:20:38.050 13:40:45 env -- scripts/common.sh@353 -- # local d=2 00:20:38.050 13:40:45 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:38.050 13:40:45 env -- scripts/common.sh@355 -- # echo 2 00:20:38.050 13:40:45 env -- scripts/common.sh@366 -- # ver2[v]=2 00:20:38.050 13:40:45 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:38.050 13:40:45 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:38.050 13:40:45 env -- scripts/common.sh@368 -- # return 0 00:20:38.050 13:40:45 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:38.050 13:40:45 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:38.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.050 --rc genhtml_branch_coverage=1 00:20:38.050 --rc genhtml_function_coverage=1 00:20:38.050 --rc genhtml_legend=1 00:20:38.050 --rc geninfo_all_blocks=1 00:20:38.050 --rc geninfo_unexecuted_blocks=1 00:20:38.050 00:20:38.050 ' 00:20:38.051 13:40:45 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:38.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.051 --rc genhtml_branch_coverage=1 00:20:38.051 --rc genhtml_function_coverage=1 00:20:38.051 --rc genhtml_legend=1 00:20:38.051 --rc geninfo_all_blocks=1 00:20:38.051 --rc geninfo_unexecuted_blocks=1 00:20:38.051 00:20:38.051 ' 00:20:38.051 13:40:45 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:38.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.051 --rc genhtml_branch_coverage=1 00:20:38.051 --rc genhtml_function_coverage=1 00:20:38.051 --rc genhtml_legend=1 00:20:38.051 --rc geninfo_all_blocks=1 00:20:38.051 --rc geninfo_unexecuted_blocks=1 00:20:38.051 00:20:38.051 ' 00:20:38.051 13:40:45 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:38.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:38.051 --rc genhtml_branch_coverage=1 00:20:38.051 --rc genhtml_function_coverage=1 00:20:38.051 --rc genhtml_legend=1 00:20:38.051 --rc geninfo_all_blocks=1 00:20:38.051 --rc geninfo_unexecuted_blocks=1 00:20:38.051 00:20:38.051 ' 00:20:38.051 13:40:45 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:20:38.051 13:40:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:38.051 13:40:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.051 13:40:45 env -- common/autotest_common.sh@10 -- # set +x 00:20:38.051 ************************************ 00:20:38.051 START TEST env_memory 00:20:38.051 ************************************ 00:20:38.051 13:40:45 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:20:38.051 00:20:38.051 00:20:38.051 CUnit - A unit testing framework for C - Version 2.1-3 00:20:38.051 http://cunit.sourceforge.net/ 00:20:38.051 00:20:38.051 00:20:38.051 Suite: memory 00:20:38.051 Test: alloc and free memory map ...[2024-11-20 13:40:45.679508] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:20:38.051 passed 00:20:38.051 Test: mem map translation ...[2024-11-20 13:40:45.729209] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:20:38.051 [2024-11-20 13:40:45.729257] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:20:38.051 [2024-11-20 13:40:45.729318] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:20:38.051 [2024-11-20 13:40:45.729339] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:20:38.311 passed 00:20:38.311 Test: mem map registration ...[2024-11-20 13:40:45.809389] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:20:38.311 [2024-11-20 13:40:45.809476] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:20:38.311 passed 00:20:38.311 Test: mem map adjacent registrations ...passed 00:20:38.311 00:20:38.311 Run Summary: Type Total Ran Passed Failed Inactive 00:20:38.311 suites 1 1 n/a 0 0 00:20:38.311 tests 4 4 4 0 0 00:20:38.311 asserts 152 152 152 0 n/a 00:20:38.311 00:20:38.311 Elapsed time = 0.276 seconds 00:20:38.311 00:20:38.311 real 0m0.322s 00:20:38.311 user 0m0.289s 00:20:38.311 sys 0m0.023s 00:20:38.311 13:40:45 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.311 13:40:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:20:38.311 ************************************ 00:20:38.311 END TEST env_memory 00:20:38.311 ************************************ 00:20:38.311 13:40:45 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:20:38.311 13:40:45 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:38.311 13:40:45 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.311 13:40:45 env -- common/autotest_common.sh@10 -- # set +x 00:20:38.311 ************************************ 00:20:38.311 START TEST env_vtophys 00:20:38.311 ************************************ 00:20:38.311 13:40:46 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:20:38.571 EAL: lib.eal log level changed from notice to debug 00:20:38.571 EAL: Detected lcore 0 as core 0 on socket 0 00:20:38.571 EAL: Detected lcore 1 as core 0 on socket 0 00:20:38.571 EAL: Detected lcore 2 as core 0 on socket 0 00:20:38.571 EAL: Detected lcore 3 as core 0 on socket 0 00:20:38.571 EAL: Detected lcore 4 as core 0 on socket 0 00:20:38.571 EAL: Detected lcore 5 as core 0 on socket 0 00:20:38.571 EAL: Detected lcore 6 as core 0 on socket 0 00:20:38.571 EAL: Detected lcore 7 as core 0 on socket 0 00:20:38.571 EAL: Detected lcore 8 as core 0 on socket 0 00:20:38.571 EAL: Detected lcore 9 as core 0 on socket 0 00:20:38.571 EAL: Maximum logical cores by configuration: 128 00:20:38.571 EAL: Detected CPU lcores: 10 00:20:38.571 EAL: Detected NUMA nodes: 1 00:20:38.571 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:20:38.571 EAL: Detected shared linkage of DPDK 00:20:38.571 EAL: No 
shared files mode enabled, IPC will be disabled 00:20:38.571 EAL: Selected IOVA mode 'PA' 00:20:38.571 EAL: Probing VFIO support... 00:20:38.571 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:20:38.571 EAL: VFIO modules not loaded, skipping VFIO support... 00:20:38.571 EAL: Ask a virtual area of 0x2e000 bytes 00:20:38.571 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:20:38.571 EAL: Setting up physically contiguous memory... 00:20:38.571 EAL: Setting maximum number of open files to 524288 00:20:38.571 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:20:38.571 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:20:38.571 EAL: Ask a virtual area of 0x61000 bytes 00:20:38.571 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:20:38.571 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:20:38.571 EAL: Ask a virtual area of 0x400000000 bytes 00:20:38.571 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:20:38.571 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:20:38.571 EAL: Ask a virtual area of 0x61000 bytes 00:20:38.571 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:20:38.571 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:20:38.571 EAL: Ask a virtual area of 0x400000000 bytes 00:20:38.571 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:20:38.571 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:20:38.571 EAL: Ask a virtual area of 0x61000 bytes 00:20:38.571 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:20:38.571 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:20:38.571 EAL: Ask a virtual area of 0x400000000 bytes 00:20:38.571 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:20:38.571 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:20:38.571 EAL: Ask a virtual area of 0x61000 bytes 00:20:38.571 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:20:38.571 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:20:38.571 EAL: Ask a virtual area of 0x400000000 bytes 00:20:38.571 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:20:38.571 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:20:38.571 EAL: Hugepages will be freed exactly as allocated. 00:20:38.571 EAL: No shared files mode enabled, IPC is disabled 00:20:38.571 EAL: No shared files mode enabled, IPC is disabled 00:20:38.571 EAL: TSC frequency is ~2290000 KHz 00:20:38.571 EAL: Main lcore 0 is ready (tid=7f96548daa40;cpuset=[0]) 00:20:38.571 EAL: Trying to obtain current memory policy. 00:20:38.571 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:38.571 EAL: Restoring previous memory policy: 0 00:20:38.571 EAL: request: mp_malloc_sync 00:20:38.571 EAL: No shared files mode enabled, IPC is disabled 00:20:38.571 EAL: Heap on socket 0 was expanded by 2MB 00:20:38.571 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:20:38.571 EAL: No PCI address specified using 'addr=' in: bus=pci 00:20:38.571 EAL: Mem event callback 'spdk:(nil)' registered 00:20:38.571 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:20:38.571 00:20:38.571 00:20:38.571 CUnit - A unit testing framework for C - Version 2.1-3 00:20:38.571 http://cunit.sourceforge.net/ 00:20:38.571 00:20:38.571 00:20:38.571 Suite: components_suite 00:20:39.140 Test: vtophys_malloc_test ...passed 00:20:39.140 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:20:39.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:39.140 EAL: Restoring previous memory policy: 4 00:20:39.140 EAL: Calling mem event callback 'spdk:(nil)' 00:20:39.140 EAL: request: mp_malloc_sync 00:20:39.140 EAL: No shared files mode enabled, IPC is disabled 00:20:39.140 EAL: Heap on socket 0 was expanded by 4MB 00:20:39.140 EAL: Calling mem event callback 'spdk:(nil)' 00:20:39.140 EAL: request: mp_malloc_sync 00:20:39.140 EAL: No shared files mode enabled, IPC is disabled 00:20:39.140 EAL: Heap on socket 0 was shrunk by 4MB 00:20:39.140 EAL: Trying to obtain current memory policy. 00:20:39.140 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:39.140 EAL: Restoring previous memory policy: 4 00:20:39.140 EAL: Calling mem event callback 'spdk:(nil)' 00:20:39.140 EAL: request: mp_malloc_sync 00:20:39.140 EAL: No shared files mode enabled, IPC is disabled 00:20:39.141 EAL: Heap on socket 0 was expanded by 6MB 00:20:39.141 EAL: Calling mem event callback 'spdk:(nil)' 00:20:39.141 EAL: request: mp_malloc_sync 00:20:39.141 EAL: No shared files mode enabled, IPC is disabled 00:20:39.141 EAL: Heap on socket 0 was shrunk by 6MB 00:20:39.141 EAL: Trying to obtain current memory policy. 00:20:39.141 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:39.141 EAL: Restoring previous memory policy: 4 00:20:39.141 EAL: Calling mem event callback 'spdk:(nil)' 00:20:39.141 EAL: request: mp_malloc_sync 00:20:39.141 EAL: No shared files mode enabled, IPC is disabled 00:20:39.141 EAL: Heap on socket 0 was expanded by 10MB 00:20:39.141 EAL: Calling mem event callback 'spdk:(nil)' 00:20:39.141 EAL: request: mp_malloc_sync 00:20:39.141 EAL: No shared files mode enabled, IPC is disabled 00:20:39.141 EAL: Heap on socket 0 was shrunk by 10MB 00:20:39.141 EAL: Trying to obtain current memory policy. 00:20:39.141 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:39.141 EAL: Restoring previous memory policy: 4 00:20:39.141 EAL: Calling mem event callback 'spdk:(nil)' 00:20:39.141 EAL: request: mp_malloc_sync 00:20:39.141 EAL: No shared files mode enabled, IPC is disabled 00:20:39.141 EAL: Heap on socket 0 was expanded by 18MB 00:20:39.141 EAL: Calling mem event callback 'spdk:(nil)' 00:20:39.141 EAL: request: mp_malloc_sync 00:20:39.141 EAL: No shared files mode enabled, IPC is disabled 00:20:39.141 EAL: Heap on socket 0 was shrunk by 18MB 00:20:39.141 EAL: Trying to obtain current memory policy. 00:20:39.141 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:39.141 EAL: Restoring previous memory policy: 4 00:20:39.141 EAL: Calling mem event callback 'spdk:(nil)' 00:20:39.141 EAL: request: mp_malloc_sync 00:20:39.141 EAL: No shared files mode enabled, IPC is disabled 00:20:39.141 EAL: Heap on socket 0 was expanded by 34MB 00:20:39.400 EAL: Calling mem event callback 'spdk:(nil)' 00:20:39.400 EAL: request: mp_malloc_sync 00:20:39.400 EAL: No shared files mode enabled, IPC is disabled 00:20:39.400 EAL: Heap on socket 0 was shrunk by 34MB 00:20:39.400 EAL: Trying to obtain current memory policy. 
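The expand/shrink pairs above are EAL growing and releasing the heap as the vtophys test doubles its allocation size; each step fires the 'spdk:(nil)' mem event callback registered earlier, and the steps below continue the same doubling pattern. A minimal sketch of re-running this unit test by hand, assuming the repo layout printed in this log (hugepage setup may be needed depending on the environment):

  # Re-run the vtophys CUnit binary invoked by env.sh earlier in this log
  # and watch EAL report each heap expansion/shrink pair.
  cd /home/vagrant/spdk_repo/spdk
  sudo ./test/env/vtophys/vtophys 2>&1 | grep -E 'expanded by|shrunk by'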
00:20:39.400 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:39.400 EAL: Restoring previous memory policy: 4 00:20:39.400 EAL: Calling mem event callback 'spdk:(nil)' 00:20:39.400 EAL: request: mp_malloc_sync 00:20:39.400 EAL: No shared files mode enabled, IPC is disabled 00:20:39.400 EAL: Heap on socket 0 was expanded by 66MB 00:20:39.400 EAL: Calling mem event callback 'spdk:(nil)' 00:20:39.400 EAL: request: mp_malloc_sync 00:20:39.400 EAL: No shared files mode enabled, IPC is disabled 00:20:39.400 EAL: Heap on socket 0 was shrunk by 66MB 00:20:39.659 EAL: Trying to obtain current memory policy. 00:20:39.659 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:39.659 EAL: Restoring previous memory policy: 4 00:20:39.659 EAL: Calling mem event callback 'spdk:(nil)' 00:20:39.659 EAL: request: mp_malloc_sync 00:20:39.659 EAL: No shared files mode enabled, IPC is disabled 00:20:39.659 EAL: Heap on socket 0 was expanded by 130MB 00:20:39.917 EAL: Calling mem event callback 'spdk:(nil)' 00:20:39.917 EAL: request: mp_malloc_sync 00:20:39.917 EAL: No shared files mode enabled, IPC is disabled 00:20:39.917 EAL: Heap on socket 0 was shrunk by 130MB 00:20:40.177 EAL: Trying to obtain current memory policy. 00:20:40.177 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:40.177 EAL: Restoring previous memory policy: 4 00:20:40.177 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.177 EAL: request: mp_malloc_sync 00:20:40.177 EAL: No shared files mode enabled, IPC is disabled 00:20:40.177 EAL: Heap on socket 0 was expanded by 258MB 00:20:40.745 EAL: Calling mem event callback 'spdk:(nil)' 00:20:40.745 EAL: request: mp_malloc_sync 00:20:40.745 EAL: No shared files mode enabled, IPC is disabled 00:20:40.745 EAL: Heap on socket 0 was shrunk by 258MB 00:20:41.314 EAL: Trying to obtain current memory policy. 00:20:41.314 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:41.314 EAL: Restoring previous memory policy: 4 00:20:41.314 EAL: Calling mem event callback 'spdk:(nil)' 00:20:41.314 EAL: request: mp_malloc_sync 00:20:41.314 EAL: No shared files mode enabled, IPC is disabled 00:20:41.314 EAL: Heap on socket 0 was expanded by 514MB 00:20:42.319 EAL: Calling mem event callback 'spdk:(nil)' 00:20:42.319 EAL: request: mp_malloc_sync 00:20:42.319 EAL: No shared files mode enabled, IPC is disabled 00:20:42.319 EAL: Heap on socket 0 was shrunk by 514MB 00:20:43.256 EAL: Trying to obtain current memory policy. 
00:20:43.256 EAL: Setting policy MPOL_PREFERRED for socket 0 00:20:43.516 EAL: Restoring previous memory policy: 4 00:20:43.516 EAL: Calling mem event callback 'spdk:(nil)' 00:20:43.516 EAL: request: mp_malloc_sync 00:20:43.516 EAL: No shared files mode enabled, IPC is disabled 00:20:43.516 EAL: Heap on socket 0 was expanded by 1026MB 00:20:45.421 EAL: Calling mem event callback 'spdk:(nil)' 00:20:45.680 EAL: request: mp_malloc_sync 00:20:45.680 EAL: No shared files mode enabled, IPC is disabled 00:20:45.680 EAL: Heap on socket 0 was shrunk by 1026MB 00:20:47.587 passed 00:20:47.587 00:20:47.587 Run Summary: Type Total Ran Passed Failed Inactive 00:20:47.587 suites 1 1 n/a 0 0 00:20:47.587 tests 2 2 2 0 0 00:20:47.587 asserts 5726 5726 5726 0 n/a 00:20:47.587 00:20:47.587 Elapsed time = 8.889 seconds 00:20:47.587 EAL: Calling mem event callback 'spdk:(nil)' 00:20:47.587 EAL: request: mp_malloc_sync 00:20:47.587 EAL: No shared files mode enabled, IPC is disabled 00:20:47.587 EAL: Heap on socket 0 was shrunk by 2MB 00:20:47.587 EAL: No shared files mode enabled, IPC is disabled 00:20:47.587 EAL: No shared files mode enabled, IPC is disabled 00:20:47.587 EAL: No shared files mode enabled, IPC is disabled 00:20:47.587 00:20:47.587 real 0m9.235s 00:20:47.587 user 0m8.155s 00:20:47.587 sys 0m0.910s 00:20:47.587 13:40:55 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.587 13:40:55 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:20:47.587 ************************************ 00:20:47.587 END TEST env_vtophys 00:20:47.587 ************************************ 00:20:47.587 13:40:55 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:20:47.587 13:40:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:47.587 13:40:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.587 13:40:55 env -- common/autotest_common.sh@10 -- # set +x 00:20:47.587 ************************************ 00:20:47.587 START TEST env_pci 00:20:47.587 ************************************ 00:20:47.587 13:40:55 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:20:47.847 00:20:47.847 00:20:47.847 CUnit - A unit testing framework for C - Version 2.1-3 00:20:47.847 http://cunit.sourceforge.net/ 00:20:47.847 00:20:47.847 00:20:47.847 Suite: pci 00:20:47.847 Test: pci_hook ...[2024-11-20 13:40:55.337360] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57950 has claimed it 00:20:47.847 passed 00:20:47.847 00:20:47.847 Run Summary: Type Total Ran Passed Failed Inactive 00:20:47.847 suites 1 1 n/a 0 0 00:20:47.847 tests 1 1 1 0 0 00:20:47.847 asserts 25 25 25 0 n/a 00:20:47.847 00:20:47.847 Elapsed time = 0.009 seconds 00:20:47.847 EAL: Cannot find device (10000:00:01.0) 00:20:47.847 EAL: Failed to attach device on primary process 00:20:47.847 00:20:47.847 real 0m0.093s 00:20:47.847 user 0m0.037s 00:20:47.847 sys 0m0.054s 00:20:47.847 13:40:55 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.847 13:40:55 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:20:47.847 ************************************ 00:20:47.847 END TEST env_pci 00:20:47.847 ************************************ 00:20:47.847 13:40:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:20:47.847 13:40:55 env -- env/env.sh@15 -- # uname 00:20:47.847 13:40:55 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:20:47.847 13:40:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:20:47.847 13:40:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:20:47.847 13:40:55 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:47.847 13:40:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.847 13:40:55 env -- common/autotest_common.sh@10 -- # set +x 00:20:47.847 ************************************ 00:20:47.847 START TEST env_dpdk_post_init 00:20:47.847 ************************************ 00:20:47.847 13:40:55 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:20:47.847 EAL: Detected CPU lcores: 10 00:20:47.847 EAL: Detected NUMA nodes: 1 00:20:47.847 EAL: Detected shared linkage of DPDK 00:20:47.847 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:20:47.847 EAL: Selected IOVA mode 'PA' 00:20:48.106 TELEMETRY: No legacy callbacks, legacy socket not created 00:20:48.106 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:20:48.106 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:20:48.106 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:20:48.106 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:20:48.106 Starting DPDK initialization... 00:20:48.106 Starting SPDK post initialization... 00:20:48.106 SPDK NVMe probe 00:20:48.106 Attaching to 0000:00:10.0 00:20:48.106 Attaching to 0000:00:11.0 00:20:48.106 Attaching to 0000:00:12.0 00:20:48.106 Attaching to 0000:00:13.0 00:20:48.106 Attached to 0000:00:10.0 00:20:48.106 Attached to 0000:00:11.0 00:20:48.106 Attached to 0000:00:13.0 00:20:48.106 Attached to 0000:00:12.0 00:20:48.106 Cleaning up... 
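The four "Attached to" lines confirm the spdk_nvme driver bound each emulated 1b36:0010 controller during DPDK post-initialization; the timing summary for this test follows below. A minimal sketch of the equivalent manual invocation, reusing the exact flags the harness passed above:

  # Same binary and flags as the run_test invocation earlier in this log:
  # -c 0x1 pins a single core, --base-virtaddr fixes the DPDK mapping base.
  sudo /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
      -c 0x1 --base-virtaddr=0x200000000000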
00:20:48.106 00:20:48.106 real 0m0.300s 00:20:48.106 user 0m0.107s 00:20:48.106 sys 0m0.098s 00:20:48.106 13:40:55 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.106 13:40:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:20:48.106 ************************************ 00:20:48.106 END TEST env_dpdk_post_init 00:20:48.106 ************************************ 00:20:48.106 13:40:55 env -- env/env.sh@26 -- # uname 00:20:48.106 13:40:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:20:48.106 13:40:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:20:48.106 13:40:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:48.106 13:40:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.106 13:40:55 env -- common/autotest_common.sh@10 -- # set +x 00:20:48.106 ************************************ 00:20:48.106 START TEST env_mem_callbacks 00:20:48.106 ************************************ 00:20:48.106 13:40:55 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:20:48.365 EAL: Detected CPU lcores: 10 00:20:48.365 EAL: Detected NUMA nodes: 1 00:20:48.365 EAL: Detected shared linkage of DPDK 00:20:48.365 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:20:48.365 EAL: Selected IOVA mode 'PA' 00:20:48.365 00:20:48.365 00:20:48.365 CUnit - A unit testing framework for C - Version 2.1-3 00:20:48.365 http://cunit.sourceforge.net/ 00:20:48.365 00:20:48.365 00:20:48.365 Suite: memory 00:20:48.365 Test: test ... 00:20:48.365 register 0x200000200000 2097152 00:20:48.365 malloc 3145728 00:20:48.365 TELEMETRY: No legacy callbacks, legacy socket not created 00:20:48.365 register 0x200000400000 4194304 00:20:48.365 buf 0x2000004fffc0 len 3145728 PASSED 00:20:48.365 malloc 64 00:20:48.365 buf 0x2000004ffec0 len 64 PASSED 00:20:48.365 malloc 4194304 00:20:48.365 register 0x200000800000 6291456 00:20:48.365 buf 0x2000009fffc0 len 4194304 PASSED 00:20:48.365 free 0x2000004fffc0 3145728 00:20:48.365 free 0x2000004ffec0 64 00:20:48.365 unregister 0x200000400000 4194304 PASSED 00:20:48.365 free 0x2000009fffc0 4194304 00:20:48.365 unregister 0x200000800000 6291456 PASSED 00:20:48.365 malloc 8388608 00:20:48.365 register 0x200000400000 10485760 00:20:48.365 buf 0x2000005fffc0 len 8388608 PASSED 00:20:48.365 free 0x2000005fffc0 8388608 00:20:48.365 unregister 0x200000400000 10485760 PASSED 00:20:48.623 passed 00:20:48.623 00:20:48.623 Run Summary: Type Total Ran Passed Failed Inactive 00:20:48.623 suites 1 1 n/a 0 0 00:20:48.623 tests 1 1 1 0 0 00:20:48.623 asserts 15 15 15 0 n/a 00:20:48.623 00:20:48.623 Elapsed time = 0.096 seconds 00:20:48.623 00:20:48.624 real 0m0.301s 00:20:48.624 user 0m0.124s 00:20:48.624 sys 0m0.075s 00:20:48.624 13:40:56 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.624 13:40:56 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:20:48.624 ************************************ 00:20:48.624 END TEST env_mem_callbacks 00:20:48.624 ************************************ 00:20:48.624 00:20:48.624 real 0m10.783s 00:20:48.624 user 0m8.935s 00:20:48.624 sys 0m1.487s 00:20:48.624 13:40:56 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:48.624 13:40:56 env -- common/autotest_common.sh@10 -- # set +x 00:20:48.624 ************************************ 00:20:48.624 END TEST env 00:20:48.624 
************************************ 00:20:48.624 13:40:56 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:20:48.624 13:40:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:48.624 13:40:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.624 13:40:56 -- common/autotest_common.sh@10 -- # set +x 00:20:48.624 ************************************ 00:20:48.624 START TEST rpc 00:20:48.624 ************************************ 00:20:48.624 13:40:56 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:20:48.624 * Looking for test storage... 00:20:48.624 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:20:48.624 13:40:56 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:48.624 13:40:56 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:48.624 13:40:56 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:20:48.882 13:40:56 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:48.882 13:40:56 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:48.882 13:40:56 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:48.882 13:40:56 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:48.882 13:40:56 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.882 13:40:56 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:20:48.882 13:40:56 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:20:48.882 13:40:56 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:20:48.882 13:40:56 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:20:48.882 13:40:56 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:20:48.882 13:40:56 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:20:48.882 13:40:56 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:48.882 13:40:56 rpc -- scripts/common.sh@344 -- # case "$op" in 00:20:48.882 13:40:56 rpc -- scripts/common.sh@345 -- # : 1 00:20:48.882 13:40:56 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:48.882 13:40:56 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:48.882 13:40:56 rpc -- scripts/common.sh@365 -- # decimal 1 00:20:48.882 13:40:56 rpc -- scripts/common.sh@353 -- # local d=1 00:20:48.882 13:40:56 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.882 13:40:56 rpc -- scripts/common.sh@355 -- # echo 1 00:20:48.882 13:40:56 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:48.882 13:40:56 rpc -- scripts/common.sh@366 -- # decimal 2 00:20:48.882 13:40:56 rpc -- scripts/common.sh@353 -- # local d=2 00:20:48.882 13:40:56 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:48.882 13:40:56 rpc -- scripts/common.sh@355 -- # echo 2 00:20:48.882 13:40:56 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:48.882 13:40:56 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.882 13:40:56 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:48.883 13:40:56 rpc -- scripts/common.sh@368 -- # return 0 00:20:48.883 13:40:56 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:48.883 13:40:56 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:48.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.883 --rc genhtml_branch_coverage=1 00:20:48.883 --rc genhtml_function_coverage=1 00:20:48.883 --rc genhtml_legend=1 00:20:48.883 --rc geninfo_all_blocks=1 00:20:48.883 --rc geninfo_unexecuted_blocks=1 00:20:48.883 00:20:48.883 ' 00:20:48.883 13:40:56 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:48.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.883 --rc genhtml_branch_coverage=1 00:20:48.883 --rc genhtml_function_coverage=1 00:20:48.883 --rc genhtml_legend=1 00:20:48.883 --rc geninfo_all_blocks=1 00:20:48.883 --rc geninfo_unexecuted_blocks=1 00:20:48.883 00:20:48.883 ' 00:20:48.883 13:40:56 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:48.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.883 --rc genhtml_branch_coverage=1 00:20:48.883 --rc genhtml_function_coverage=1 00:20:48.883 --rc genhtml_legend=1 00:20:48.883 --rc geninfo_all_blocks=1 00:20:48.883 --rc geninfo_unexecuted_blocks=1 00:20:48.883 00:20:48.883 ' 00:20:48.883 13:40:56 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:48.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.883 --rc genhtml_branch_coverage=1 00:20:48.883 --rc genhtml_function_coverage=1 00:20:48.883 --rc genhtml_legend=1 00:20:48.883 --rc geninfo_all_blocks=1 00:20:48.883 --rc geninfo_unexecuted_blocks=1 00:20:48.883 00:20:48.883 ' 00:20:48.883 13:40:56 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58077 00:20:48.883 13:40:56 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:20:48.883 13:40:56 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:20:48.883 13:40:56 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58077 00:20:48.883 13:40:56 rpc -- common/autotest_common.sh@835 -- # '[' -z 58077 ']' 00:20:48.883 13:40:56 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.883 13:40:56 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.883 13:40:56 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
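The rpc suite starts spdk_tgt with the bdev tracepoint group enabled and then blocks in waitforlisten until the UNIX-domain RPC socket answers. A minimal sketch of the same handshake outside the harness, assuming the default /var/tmp/spdk.sock path shown here:

  # Launch the target with bdev tracing, as rpc.sh does above.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
  spdk_pid=$!

  # Poll until the RPC socket responds; rpc.py ships in the same repo.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  echo "spdk_tgt ($spdk_pid) listening on /var/tmp/spdk.sock"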
00:20:48.883 13:40:56 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.883 13:40:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:20:48.883 [2024-11-20 13:40:56.523575] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:20:48.883 [2024-11-20 13:40:56.523734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58077 ] 00:20:49.143 [2024-11-20 13:40:56.709808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.143 [2024-11-20 13:40:56.850145] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:20:49.143 [2024-11-20 13:40:56.850230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58077' to capture a snapshot of events at runtime. 00:20:49.143 [2024-11-20 13:40:56.850242] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.143 [2024-11-20 13:40:56.850269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.143 [2024-11-20 13:40:56.850279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58077 for offline analysis/debug. 00:20:49.143 [2024-11-20 13:40:56.851916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:50.522 13:40:57 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:50.522 13:40:57 rpc -- common/autotest_common.sh@868 -- # return 0 00:20:50.522 13:40:57 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:20:50.522 13:40:57 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:20:50.522 13:40:57 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:20:50.522 13:40:57 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:20:50.522 13:40:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:50.522 13:40:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.522 13:40:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:20:50.522 ************************************ 00:20:50.522 START TEST rpc_integrity 00:20:50.522 ************************************ 00:20:50.522 13:40:57 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:20:50.522 13:40:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:50.522 13:40:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.522 13:40:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:50.522 13:40:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.522 13:40:57 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:20:50.522 13:40:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:20:50.523 13:40:57 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:20:50.523 13:40:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:20:50.523 13:40:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.523 13:40:57 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:50.523 13:40:57 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.523 13:40:57 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:20:50.523 13:40:57 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:20:50.523 13:40:57 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.523 13:40:57 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:50.523 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.523 13:40:58 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:20:50.523 { 00:20:50.523 "name": "Malloc0", 00:20:50.523 "aliases": [ 00:20:50.523 "c99e699b-2774-4db6-9a8e-b684670b6963" 00:20:50.523 ], 00:20:50.523 "product_name": "Malloc disk", 00:20:50.523 "block_size": 512, 00:20:50.523 "num_blocks": 16384, 00:20:50.523 "uuid": "c99e699b-2774-4db6-9a8e-b684670b6963", 00:20:50.523 "assigned_rate_limits": { 00:20:50.523 "rw_ios_per_sec": 0, 00:20:50.523 "rw_mbytes_per_sec": 0, 00:20:50.523 "r_mbytes_per_sec": 0, 00:20:50.523 "w_mbytes_per_sec": 0 00:20:50.523 }, 00:20:50.523 "claimed": false, 00:20:50.523 "zoned": false, 00:20:50.523 "supported_io_types": { 00:20:50.523 "read": true, 00:20:50.523 "write": true, 00:20:50.523 "unmap": true, 00:20:50.523 "flush": true, 00:20:50.523 "reset": true, 00:20:50.523 "nvme_admin": false, 00:20:50.523 "nvme_io": false, 00:20:50.523 "nvme_io_md": false, 00:20:50.523 "write_zeroes": true, 00:20:50.523 "zcopy": true, 00:20:50.523 "get_zone_info": false, 00:20:50.523 "zone_management": false, 00:20:50.523 "zone_append": false, 00:20:50.523 "compare": false, 00:20:50.523 "compare_and_write": false, 00:20:50.523 "abort": true, 00:20:50.523 "seek_hole": false, 00:20:50.523 "seek_data": false, 00:20:50.523 "copy": true, 00:20:50.523 "nvme_iov_md": false 00:20:50.523 }, 00:20:50.523 "memory_domains": [ 00:20:50.523 { 00:20:50.523 "dma_device_id": "system", 00:20:50.523 "dma_device_type": 1 00:20:50.523 }, 00:20:50.523 { 00:20:50.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.523 "dma_device_type": 2 00:20:50.523 } 00:20:50.523 ], 00:20:50.523 "driver_specific": {} 00:20:50.523 } 00:20:50.523 ]' 00:20:50.523 13:40:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:20:50.523 13:40:58 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:20:50.523 13:40:58 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:20:50.523 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.523 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:50.523 [2024-11-20 13:40:58.061469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:20:50.523 [2024-11-20 13:40:58.061568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.523 [2024-11-20 13:40:58.061609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:20:50.523 [2024-11-20 13:40:58.061623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.523 [2024-11-20 13:40:58.064556] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.523 [2024-11-20 13:40:58.064614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:20:50.523 Passthru0 00:20:50.523 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.523 
13:40:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:20:50.523 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.523 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:50.523 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.523 13:40:58 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:20:50.523 { 00:20:50.523 "name": "Malloc0", 00:20:50.523 "aliases": [ 00:20:50.523 "c99e699b-2774-4db6-9a8e-b684670b6963" 00:20:50.523 ], 00:20:50.523 "product_name": "Malloc disk", 00:20:50.523 "block_size": 512, 00:20:50.523 "num_blocks": 16384, 00:20:50.523 "uuid": "c99e699b-2774-4db6-9a8e-b684670b6963", 00:20:50.523 "assigned_rate_limits": { 00:20:50.523 "rw_ios_per_sec": 0, 00:20:50.523 "rw_mbytes_per_sec": 0, 00:20:50.523 "r_mbytes_per_sec": 0, 00:20:50.523 "w_mbytes_per_sec": 0 00:20:50.523 }, 00:20:50.523 "claimed": true, 00:20:50.523 "claim_type": "exclusive_write", 00:20:50.523 "zoned": false, 00:20:50.523 "supported_io_types": { 00:20:50.523 "read": true, 00:20:50.523 "write": true, 00:20:50.523 "unmap": true, 00:20:50.523 "flush": true, 00:20:50.523 "reset": true, 00:20:50.523 "nvme_admin": false, 00:20:50.523 "nvme_io": false, 00:20:50.523 "nvme_io_md": false, 00:20:50.523 "write_zeroes": true, 00:20:50.523 "zcopy": true, 00:20:50.523 "get_zone_info": false, 00:20:50.523 "zone_management": false, 00:20:50.523 "zone_append": false, 00:20:50.523 "compare": false, 00:20:50.523 "compare_and_write": false, 00:20:50.523 "abort": true, 00:20:50.523 "seek_hole": false, 00:20:50.523 "seek_data": false, 00:20:50.523 "copy": true, 00:20:50.523 "nvme_iov_md": false 00:20:50.523 }, 00:20:50.523 "memory_domains": [ 00:20:50.523 { 00:20:50.523 "dma_device_id": "system", 00:20:50.523 "dma_device_type": 1 00:20:50.523 }, 00:20:50.523 { 00:20:50.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.523 "dma_device_type": 2 00:20:50.523 } 00:20:50.523 ], 00:20:50.523 "driver_specific": {} 00:20:50.523 }, 00:20:50.523 { 00:20:50.523 "name": "Passthru0", 00:20:50.523 "aliases": [ 00:20:50.523 "7346fbb0-fe09-587d-8383-8f903337c186" 00:20:50.523 ], 00:20:50.523 "product_name": "passthru", 00:20:50.523 "block_size": 512, 00:20:50.523 "num_blocks": 16384, 00:20:50.523 "uuid": "7346fbb0-fe09-587d-8383-8f903337c186", 00:20:50.523 "assigned_rate_limits": { 00:20:50.523 "rw_ios_per_sec": 0, 00:20:50.523 "rw_mbytes_per_sec": 0, 00:20:50.523 "r_mbytes_per_sec": 0, 00:20:50.523 "w_mbytes_per_sec": 0 00:20:50.523 }, 00:20:50.523 "claimed": false, 00:20:50.523 "zoned": false, 00:20:50.523 "supported_io_types": { 00:20:50.523 "read": true, 00:20:50.523 "write": true, 00:20:50.523 "unmap": true, 00:20:50.523 "flush": true, 00:20:50.523 "reset": true, 00:20:50.523 "nvme_admin": false, 00:20:50.523 "nvme_io": false, 00:20:50.524 "nvme_io_md": false, 00:20:50.524 "write_zeroes": true, 00:20:50.524 "zcopy": true, 00:20:50.524 "get_zone_info": false, 00:20:50.524 "zone_management": false, 00:20:50.524 "zone_append": false, 00:20:50.524 "compare": false, 00:20:50.524 "compare_and_write": false, 00:20:50.524 "abort": true, 00:20:50.524 "seek_hole": false, 00:20:50.524 "seek_data": false, 00:20:50.524 "copy": true, 00:20:50.524 "nvme_iov_md": false 00:20:50.524 }, 00:20:50.524 "memory_domains": [ 00:20:50.524 { 00:20:50.524 "dma_device_id": "system", 00:20:50.524 "dma_device_type": 1 00:20:50.524 }, 00:20:50.524 { 00:20:50.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.524 "dma_device_type": 2 
00:20:50.524 } 00:20:50.524 ], 00:20:50.524 "driver_specific": { 00:20:50.524 "passthru": { 00:20:50.524 "name": "Passthru0", 00:20:50.524 "base_bdev_name": "Malloc0" 00:20:50.524 } 00:20:50.524 } 00:20:50.524 } 00:20:50.524 ]' 00:20:50.524 13:40:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:20:50.524 13:40:58 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:20:50.524 13:40:58 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:20:50.524 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.524 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:50.524 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.524 13:40:58 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:50.524 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.524 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:50.524 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.524 13:40:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:50.524 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.524 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:50.524 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.524 13:40:58 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:20:50.524 13:40:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:20:50.783 ************************************ 00:20:50.783 END TEST rpc_integrity 00:20:50.783 ************************************ 00:20:50.783 13:40:58 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:20:50.783 00:20:50.783 real 0m0.368s 00:20:50.783 user 0m0.195s 00:20:50.783 sys 0m0.049s 00:20:50.783 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.783 13:40:58 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:50.783 13:40:58 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:20:50.783 13:40:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:50.783 13:40:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.783 13:40:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:20:50.783 ************************************ 00:20:50.783 START TEST rpc_plugins 00:20:50.783 ************************************ 00:20:50.783 13:40:58 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:20:50.783 13:40:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:20:50.783 13:40:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.783 13:40:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:20:50.783 13:40:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.783 13:40:58 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:20:50.783 13:40:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:20:50.783 13:40:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.783 13:40:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:20:50.783 13:40:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.783 13:40:58 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:20:50.783 { 00:20:50.783 "name": "Malloc1", 00:20:50.783 "aliases": 
[ 00:20:50.783 "43b37c72-08f4-4cf9-8c0a-cdcf9b605807" 00:20:50.783 ], 00:20:50.783 "product_name": "Malloc disk", 00:20:50.783 "block_size": 4096, 00:20:50.783 "num_blocks": 256, 00:20:50.783 "uuid": "43b37c72-08f4-4cf9-8c0a-cdcf9b605807", 00:20:50.783 "assigned_rate_limits": { 00:20:50.783 "rw_ios_per_sec": 0, 00:20:50.783 "rw_mbytes_per_sec": 0, 00:20:50.783 "r_mbytes_per_sec": 0, 00:20:50.783 "w_mbytes_per_sec": 0 00:20:50.783 }, 00:20:50.783 "claimed": false, 00:20:50.783 "zoned": false, 00:20:50.783 "supported_io_types": { 00:20:50.783 "read": true, 00:20:50.783 "write": true, 00:20:50.783 "unmap": true, 00:20:50.783 "flush": true, 00:20:50.783 "reset": true, 00:20:50.783 "nvme_admin": false, 00:20:50.783 "nvme_io": false, 00:20:50.783 "nvme_io_md": false, 00:20:50.783 "write_zeroes": true, 00:20:50.783 "zcopy": true, 00:20:50.783 "get_zone_info": false, 00:20:50.783 "zone_management": false, 00:20:50.783 "zone_append": false, 00:20:50.783 "compare": false, 00:20:50.783 "compare_and_write": false, 00:20:50.783 "abort": true, 00:20:50.783 "seek_hole": false, 00:20:50.783 "seek_data": false, 00:20:50.783 "copy": true, 00:20:50.783 "nvme_iov_md": false 00:20:50.783 }, 00:20:50.783 "memory_domains": [ 00:20:50.783 { 00:20:50.783 "dma_device_id": "system", 00:20:50.783 "dma_device_type": 1 00:20:50.783 }, 00:20:50.783 { 00:20:50.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:50.783 "dma_device_type": 2 00:20:50.783 } 00:20:50.783 ], 00:20:50.783 "driver_specific": {} 00:20:50.783 } 00:20:50.783 ]' 00:20:50.783 13:40:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:20:50.783 13:40:58 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:20:50.783 13:40:58 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:20:50.783 13:40:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.783 13:40:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:20:50.783 13:40:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.783 13:40:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:20:50.783 13:40:58 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.783 13:40:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:20:50.783 13:40:58 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.783 13:40:58 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:20:50.783 13:40:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:20:50.783 13:40:58 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:20:50.783 00:20:50.783 real 0m0.162s 00:20:50.783 user 0m0.093s 00:20:50.783 sys 0m0.025s 00:20:50.783 13:40:58 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.783 13:40:58 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:20:50.783 ************************************ 00:20:50.783 END TEST rpc_plugins 00:20:50.783 ************************************ 00:20:51.041 13:40:58 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:20:51.041 13:40:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:51.041 13:40:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.041 13:40:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:20:51.041 ************************************ 00:20:51.041 START TEST rpc_trace_cmd_test 00:20:51.041 ************************************ 00:20:51.041 13:40:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 
-- # rpc_trace_cmd_test 00:20:51.041 13:40:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:20:51.041 13:40:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:20:51.041 13:40:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.041 13:40:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.041 13:40:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.041 13:40:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:20:51.041 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58077", 00:20:51.041 "tpoint_group_mask": "0x8", 00:20:51.041 "iscsi_conn": { 00:20:51.041 "mask": "0x2", 00:20:51.041 "tpoint_mask": "0x0" 00:20:51.041 }, 00:20:51.041 "scsi": { 00:20:51.041 "mask": "0x4", 00:20:51.041 "tpoint_mask": "0x0" 00:20:51.041 }, 00:20:51.041 "bdev": { 00:20:51.041 "mask": "0x8", 00:20:51.041 "tpoint_mask": "0xffffffffffffffff" 00:20:51.041 }, 00:20:51.041 "nvmf_rdma": { 00:20:51.041 "mask": "0x10", 00:20:51.041 "tpoint_mask": "0x0" 00:20:51.041 }, 00:20:51.041 "nvmf_tcp": { 00:20:51.041 "mask": "0x20", 00:20:51.041 "tpoint_mask": "0x0" 00:20:51.041 }, 00:20:51.041 "ftl": { 00:20:51.041 "mask": "0x40", 00:20:51.041 "tpoint_mask": "0x0" 00:20:51.041 }, 00:20:51.041 "blobfs": { 00:20:51.041 "mask": "0x80", 00:20:51.041 "tpoint_mask": "0x0" 00:20:51.041 }, 00:20:51.041 "dsa": { 00:20:51.041 "mask": "0x200", 00:20:51.041 "tpoint_mask": "0x0" 00:20:51.041 }, 00:20:51.041 "thread": { 00:20:51.041 "mask": "0x400", 00:20:51.041 "tpoint_mask": "0x0" 00:20:51.041 }, 00:20:51.041 "nvme_pcie": { 00:20:51.041 "mask": "0x800", 00:20:51.041 "tpoint_mask": "0x0" 00:20:51.041 }, 00:20:51.041 "iaa": { 00:20:51.041 "mask": "0x1000", 00:20:51.041 "tpoint_mask": "0x0" 00:20:51.041 }, 00:20:51.041 "nvme_tcp": { 00:20:51.041 "mask": "0x2000", 00:20:51.041 "tpoint_mask": "0x0" 00:20:51.041 }, 00:20:51.041 "bdev_nvme": { 00:20:51.041 "mask": "0x4000", 00:20:51.041 "tpoint_mask": "0x0" 00:20:51.041 }, 00:20:51.041 "sock": { 00:20:51.041 "mask": "0x8000", 00:20:51.041 "tpoint_mask": "0x0" 00:20:51.041 }, 00:20:51.041 "blob": { 00:20:51.041 "mask": "0x10000", 00:20:51.041 "tpoint_mask": "0x0" 00:20:51.041 }, 00:20:51.041 "bdev_raid": { 00:20:51.041 "mask": "0x20000", 00:20:51.041 "tpoint_mask": "0x0" 00:20:51.041 }, 00:20:51.041 "scheduler": { 00:20:51.041 "mask": "0x40000", 00:20:51.041 "tpoint_mask": "0x0" 00:20:51.041 } 00:20:51.041 }' 00:20:51.041 13:40:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:20:51.041 13:40:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:20:51.041 13:40:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:20:51.041 13:40:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:20:51.041 13:40:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:20:51.041 13:40:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:20:51.041 13:40:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:20:51.300 13:40:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:20:51.300 13:40:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:20:51.300 13:40:58 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:20:51.300 00:20:51.300 real 0m0.253s 00:20:51.300 user 0m0.204s 00:20:51.300 sys 0m0.039s 00:20:51.300 13:40:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:20:51.300 13:40:58 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:20:51.300 ************************************ 00:20:51.300 END TEST rpc_trace_cmd_test 00:20:51.300 ************************************ 00:20:51.300 13:40:58 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:20:51.300 13:40:58 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:20:51.300 13:40:58 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:20:51.300 13:40:58 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:51.300 13:40:58 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.300 13:40:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:20:51.300 ************************************ 00:20:51.300 START TEST rpc_daemon_integrity 00:20:51.300 ************************************ 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:20:51.300 { 00:20:51.300 "name": "Malloc2", 00:20:51.300 "aliases": [ 00:20:51.300 "28003b66-bd70-4138-af7d-454044ed916e" 00:20:51.300 ], 00:20:51.300 "product_name": "Malloc disk", 00:20:51.300 "block_size": 512, 00:20:51.300 "num_blocks": 16384, 00:20:51.300 "uuid": "28003b66-bd70-4138-af7d-454044ed916e", 00:20:51.300 "assigned_rate_limits": { 00:20:51.300 "rw_ios_per_sec": 0, 00:20:51.300 "rw_mbytes_per_sec": 0, 00:20:51.300 "r_mbytes_per_sec": 0, 00:20:51.300 "w_mbytes_per_sec": 0 00:20:51.300 }, 00:20:51.300 "claimed": false, 00:20:51.300 "zoned": false, 00:20:51.300 "supported_io_types": { 00:20:51.300 "read": true, 00:20:51.300 "write": true, 00:20:51.300 "unmap": true, 00:20:51.300 "flush": true, 00:20:51.300 "reset": true, 00:20:51.300 "nvme_admin": false, 00:20:51.300 "nvme_io": false, 00:20:51.300 "nvme_io_md": false, 00:20:51.300 "write_zeroes": true, 00:20:51.300 "zcopy": true, 00:20:51.300 "get_zone_info": false, 00:20:51.300 "zone_management": false, 00:20:51.300 "zone_append": false, 00:20:51.300 "compare": false, 00:20:51.300 
"compare_and_write": false, 00:20:51.300 "abort": true, 00:20:51.300 "seek_hole": false, 00:20:51.300 "seek_data": false, 00:20:51.300 "copy": true, 00:20:51.300 "nvme_iov_md": false 00:20:51.300 }, 00:20:51.300 "memory_domains": [ 00:20:51.300 { 00:20:51.300 "dma_device_id": "system", 00:20:51.300 "dma_device_type": 1 00:20:51.300 }, 00:20:51.300 { 00:20:51.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.300 "dma_device_type": 2 00:20:51.300 } 00:20:51.300 ], 00:20:51.300 "driver_specific": {} 00:20:51.300 } 00:20:51.300 ]' 00:20:51.300 13:40:58 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:51.559 [2024-11-20 13:40:59.039758] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:20:51.559 [2024-11-20 13:40:59.039846] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.559 [2024-11-20 13:40:59.039876] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:20:51.559 [2024-11-20 13:40:59.039890] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.559 [2024-11-20 13:40:59.042756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.559 [2024-11-20 13:40:59.042814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:20:51.559 Passthru0 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:20:51.559 { 00:20:51.559 "name": "Malloc2", 00:20:51.559 "aliases": [ 00:20:51.559 "28003b66-bd70-4138-af7d-454044ed916e" 00:20:51.559 ], 00:20:51.559 "product_name": "Malloc disk", 00:20:51.559 "block_size": 512, 00:20:51.559 "num_blocks": 16384, 00:20:51.559 "uuid": "28003b66-bd70-4138-af7d-454044ed916e", 00:20:51.559 "assigned_rate_limits": { 00:20:51.559 "rw_ios_per_sec": 0, 00:20:51.559 "rw_mbytes_per_sec": 0, 00:20:51.559 "r_mbytes_per_sec": 0, 00:20:51.559 "w_mbytes_per_sec": 0 00:20:51.559 }, 00:20:51.559 "claimed": true, 00:20:51.559 "claim_type": "exclusive_write", 00:20:51.559 "zoned": false, 00:20:51.559 "supported_io_types": { 00:20:51.559 "read": true, 00:20:51.559 "write": true, 00:20:51.559 "unmap": true, 00:20:51.559 "flush": true, 00:20:51.559 "reset": true, 00:20:51.559 "nvme_admin": false, 00:20:51.559 "nvme_io": false, 00:20:51.559 "nvme_io_md": false, 00:20:51.559 "write_zeroes": true, 00:20:51.559 "zcopy": true, 00:20:51.559 "get_zone_info": false, 00:20:51.559 "zone_management": false, 00:20:51.559 "zone_append": false, 00:20:51.559 "compare": false, 00:20:51.559 "compare_and_write": false, 00:20:51.559 "abort": true, 00:20:51.559 "seek_hole": false, 00:20:51.559 "seek_data": false, 
00:20:51.559 "copy": true, 00:20:51.559 "nvme_iov_md": false 00:20:51.559 }, 00:20:51.559 "memory_domains": [ 00:20:51.559 { 00:20:51.559 "dma_device_id": "system", 00:20:51.559 "dma_device_type": 1 00:20:51.559 }, 00:20:51.559 { 00:20:51.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.559 "dma_device_type": 2 00:20:51.559 } 00:20:51.559 ], 00:20:51.559 "driver_specific": {} 00:20:51.559 }, 00:20:51.559 { 00:20:51.559 "name": "Passthru0", 00:20:51.559 "aliases": [ 00:20:51.559 "863138e3-b446-58b8-a4a6-9f8a3219e2c5" 00:20:51.559 ], 00:20:51.559 "product_name": "passthru", 00:20:51.559 "block_size": 512, 00:20:51.559 "num_blocks": 16384, 00:20:51.559 "uuid": "863138e3-b446-58b8-a4a6-9f8a3219e2c5", 00:20:51.559 "assigned_rate_limits": { 00:20:51.559 "rw_ios_per_sec": 0, 00:20:51.559 "rw_mbytes_per_sec": 0, 00:20:51.559 "r_mbytes_per_sec": 0, 00:20:51.559 "w_mbytes_per_sec": 0 00:20:51.559 }, 00:20:51.559 "claimed": false, 00:20:51.559 "zoned": false, 00:20:51.559 "supported_io_types": { 00:20:51.559 "read": true, 00:20:51.559 "write": true, 00:20:51.559 "unmap": true, 00:20:51.559 "flush": true, 00:20:51.559 "reset": true, 00:20:51.559 "nvme_admin": false, 00:20:51.559 "nvme_io": false, 00:20:51.559 "nvme_io_md": false, 00:20:51.559 "write_zeroes": true, 00:20:51.559 "zcopy": true, 00:20:51.559 "get_zone_info": false, 00:20:51.559 "zone_management": false, 00:20:51.559 "zone_append": false, 00:20:51.559 "compare": false, 00:20:51.559 "compare_and_write": false, 00:20:51.559 "abort": true, 00:20:51.559 "seek_hole": false, 00:20:51.559 "seek_data": false, 00:20:51.559 "copy": true, 00:20:51.559 "nvme_iov_md": false 00:20:51.559 }, 00:20:51.559 "memory_domains": [ 00:20:51.559 { 00:20:51.559 "dma_device_id": "system", 00:20:51.559 "dma_device_type": 1 00:20:51.559 }, 00:20:51.559 { 00:20:51.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.559 "dma_device_type": 2 00:20:51.559 } 00:20:51.559 ], 00:20:51.559 "driver_specific": { 00:20:51.559 "passthru": { 00:20:51.559 "name": "Passthru0", 00:20:51.559 "base_bdev_name": "Malloc2" 00:20:51.559 } 00:20:51.559 } 00:20:51.559 } 00:20:51.559 ]' 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.559 13:40:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:20:51.560 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.560 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:51.560 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.560 13:40:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:20:51.560 13:40:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:20:51.560 13:40:59 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:20:51.560 00:20:51.560 real 0m0.366s 00:20:51.560 user 0m0.198s 00:20:51.560 sys 0m0.057s 00:20:51.560 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:51.560 13:40:59 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:20:51.560 ************************************ 00:20:51.560 END TEST rpc_daemon_integrity 00:20:51.560 ************************************ 00:20:51.818 13:40:59 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:51.818 13:40:59 rpc -- rpc/rpc.sh@84 -- # killprocess 58077 00:20:51.818 13:40:59 rpc -- common/autotest_common.sh@954 -- # '[' -z 58077 ']' 00:20:51.818 13:40:59 rpc -- common/autotest_common.sh@958 -- # kill -0 58077 00:20:51.818 13:40:59 rpc -- common/autotest_common.sh@959 -- # uname 00:20:51.818 13:40:59 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.818 13:40:59 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58077 00:20:51.818 13:40:59 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:51.818 13:40:59 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:51.818 killing process with pid 58077 00:20:51.818 13:40:59 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58077' 00:20:51.818 13:40:59 rpc -- common/autotest_common.sh@973 -- # kill 58077 00:20:51.818 13:40:59 rpc -- common/autotest_common.sh@978 -- # wait 58077 00:20:55.114 00:20:55.114 real 0m5.955s 00:20:55.114 user 0m6.577s 00:20:55.114 sys 0m0.950s 00:20:55.114 13:41:02 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:55.114 13:41:02 rpc -- common/autotest_common.sh@10 -- # set +x 00:20:55.114 ************************************ 00:20:55.114 END TEST rpc 00:20:55.114 ************************************ 00:20:55.114 13:41:02 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:20:55.114 13:41:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:55.114 13:41:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.114 13:41:02 -- common/autotest_common.sh@10 -- # set +x 00:20:55.114 ************************************ 00:20:55.114 START TEST skip_rpc 00:20:55.114 ************************************ 00:20:55.114 13:41:02 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:20:55.114 * Looking for test storage... 
00:20:55.114 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:20:55.114 13:41:02 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:55.114 13:41:02 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:20:55.114 13:41:02 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:55.114 13:41:02 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@345 -- # : 1 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:55.114 13:41:02 skip_rpc -- scripts/common.sh@368 -- # return 0 00:20:55.114 13:41:02 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:55.114 13:41:02 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:55.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.114 --rc genhtml_branch_coverage=1 00:20:55.114 --rc genhtml_function_coverage=1 00:20:55.114 --rc genhtml_legend=1 00:20:55.114 --rc geninfo_all_blocks=1 00:20:55.114 --rc geninfo_unexecuted_blocks=1 00:20:55.114 00:20:55.114 ' 00:20:55.114 13:41:02 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:55.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.114 --rc genhtml_branch_coverage=1 00:20:55.114 --rc genhtml_function_coverage=1 00:20:55.114 --rc genhtml_legend=1 00:20:55.114 --rc geninfo_all_blocks=1 00:20:55.114 --rc geninfo_unexecuted_blocks=1 00:20:55.114 00:20:55.114 ' 00:20:55.114 13:41:02 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:20:55.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.114 --rc genhtml_branch_coverage=1 00:20:55.114 --rc genhtml_function_coverage=1 00:20:55.114 --rc genhtml_legend=1 00:20:55.114 --rc geninfo_all_blocks=1 00:20:55.114 --rc geninfo_unexecuted_blocks=1 00:20:55.114 00:20:55.114 ' 00:20:55.114 13:41:02 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:55.114 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.114 --rc genhtml_branch_coverage=1 00:20:55.114 --rc genhtml_function_coverage=1 00:20:55.114 --rc genhtml_legend=1 00:20:55.114 --rc geninfo_all_blocks=1 00:20:55.114 --rc geninfo_unexecuted_blocks=1 00:20:55.114 00:20:55.114 ' 00:20:55.114 13:41:02 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:20:55.114 13:41:02 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:20:55.114 13:41:02 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:20:55.114 13:41:02 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:55.114 13:41:02 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.114 13:41:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:20:55.114 ************************************ 00:20:55.114 START TEST skip_rpc 00:20:55.114 ************************************ 00:20:55.114 13:41:02 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:20:55.114 13:41:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58316 00:20:55.114 13:41:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:20:55.114 13:41:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:20:55.114 13:41:02 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:20:55.114 [2024-11-20 13:41:02.573327] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
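The target above was started with --no-rpc-server, so the assertion that follows is that any RPC attempt must fail. The real NOT helper in autotest_common.sh also validates its argument and inspects exit codes; a stripped-down sketch of the idiom:

    # succeed only if the wrapped command fails
    NOT() {
        if "$@"; then
            return 1   # unexpectedly succeeded
        fi
        return 0       # failed, as the test expects
    }
    NOT rpc_cmd spdk_get_version   # passes while no RPC server is listening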
00:20:55.115 [2024-11-20 13:41:02.574088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58316 ] 00:20:55.115 [2024-11-20 13:41:02.756932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.375 [2024-11-20 13:41:02.893453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58316 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 58316 ']' 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 58316 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58316 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:00.651 killing process with pid 58316 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58316' 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 58316 00:21:00.651 13:41:07 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 58316 00:21:03.212 00:21:03.212 real 0m7.902s 00:21:03.212 user 0m7.386s 00:21:03.212 sys 0m0.427s 00:21:03.212 13:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:03.212 13:41:10 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:03.212 ************************************ 00:21:03.212 END TEST skip_rpc 00:21:03.212 
************************************ 00:21:03.212 13:41:10 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:21:03.212 13:41:10 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:03.212 13:41:10 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:03.212 13:41:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:03.212 ************************************ 00:21:03.212 START TEST skip_rpc_with_json 00:21:03.212 ************************************ 00:21:03.212 13:41:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:21:03.212 13:41:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:21:03.212 13:41:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58421 00:21:03.212 13:41:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:03.212 13:41:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:21:03.212 13:41:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58421 00:21:03.212 13:41:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 58421 ']' 00:21:03.212 13:41:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.212 13:41:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:03.212 13:41:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.212 13:41:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:03.212 13:41:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:21:03.212 [2024-11-20 13:41:10.548479] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
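The skip_rpc_with_json run starting here saves the live configuration to JSON and later boots a second target from that file, verifying the TCP transport is recreated without any RPC traffic. By hand the round-trip looks roughly like this (a sketch; paths abbreviated):

    build/bin/spdk_tgt -m 0x1 &
    tgt=$!
    scripts/rpc.py nvmf_create_transport -t tcp      # state worth persisting
    scripts/rpc.py save_config > config.json
    kill $tgt; wait $tgt
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json &> log.txt &
    tgt=$!
    sleep 5; kill $tgt; wait $tgt                    # mirror the test's sleep 5
    grep -q 'TCP Transport Init' log.txt             # transport restored from JSON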
00:21:03.212 [2024-11-20 13:41:10.548631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58421 ] 00:21:03.212 [2024-11-20 13:41:10.714529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:03.212 [2024-11-20 13:41:10.848280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.149 13:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:04.149 13:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:21:04.149 13:41:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:21:04.149 13:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.149 13:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:21:04.149 [2024-11-20 13:41:11.842303] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:21:04.149 request: 00:21:04.149 { 00:21:04.149 "trtype": "tcp", 00:21:04.149 "method": "nvmf_get_transports", 00:21:04.149 "req_id": 1 00:21:04.149 } 00:21:04.149 Got JSON-RPC error response 00:21:04.149 response: 00:21:04.149 { 00:21:04.149 "code": -19, 00:21:04.149 "message": "No such device" 00:21:04.149 } 00:21:04.149 13:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:04.149 13:41:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:21:04.149 13:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.149 13:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:21:04.149 [2024-11-20 13:41:11.854420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:04.149 13:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.149 13:41:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:21:04.149 13:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.149 13:41:11 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:21:04.410 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.410 13:41:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:21:04.410 { 00:21:04.410 "subsystems": [ 00:21:04.410 { 00:21:04.410 "subsystem": "fsdev", 00:21:04.410 "config": [ 00:21:04.410 { 00:21:04.410 "method": "fsdev_set_opts", 00:21:04.410 "params": { 00:21:04.410 "fsdev_io_pool_size": 65535, 00:21:04.410 "fsdev_io_cache_size": 256 00:21:04.410 } 00:21:04.410 } 00:21:04.410 ] 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "subsystem": "keyring", 00:21:04.410 "config": [] 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "subsystem": "iobuf", 00:21:04.410 "config": [ 00:21:04.410 { 00:21:04.410 "method": "iobuf_set_options", 00:21:04.410 "params": { 00:21:04.410 "small_pool_count": 8192, 00:21:04.410 "large_pool_count": 1024, 00:21:04.410 "small_bufsize": 8192, 00:21:04.410 "large_bufsize": 135168, 00:21:04.410 "enable_numa": false 00:21:04.410 } 00:21:04.410 } 00:21:04.410 ] 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "subsystem": "sock", 00:21:04.410 "config": [ 00:21:04.410 { 
00:21:04.410 "method": "sock_set_default_impl", 00:21:04.410 "params": { 00:21:04.410 "impl_name": "posix" 00:21:04.410 } 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "method": "sock_impl_set_options", 00:21:04.410 "params": { 00:21:04.410 "impl_name": "ssl", 00:21:04.410 "recv_buf_size": 4096, 00:21:04.410 "send_buf_size": 4096, 00:21:04.410 "enable_recv_pipe": true, 00:21:04.410 "enable_quickack": false, 00:21:04.410 "enable_placement_id": 0, 00:21:04.410 "enable_zerocopy_send_server": true, 00:21:04.410 "enable_zerocopy_send_client": false, 00:21:04.410 "zerocopy_threshold": 0, 00:21:04.410 "tls_version": 0, 00:21:04.410 "enable_ktls": false 00:21:04.410 } 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "method": "sock_impl_set_options", 00:21:04.410 "params": { 00:21:04.410 "impl_name": "posix", 00:21:04.410 "recv_buf_size": 2097152, 00:21:04.410 "send_buf_size": 2097152, 00:21:04.410 "enable_recv_pipe": true, 00:21:04.410 "enable_quickack": false, 00:21:04.410 "enable_placement_id": 0, 00:21:04.410 "enable_zerocopy_send_server": true, 00:21:04.410 "enable_zerocopy_send_client": false, 00:21:04.410 "zerocopy_threshold": 0, 00:21:04.410 "tls_version": 0, 00:21:04.410 "enable_ktls": false 00:21:04.410 } 00:21:04.410 } 00:21:04.410 ] 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "subsystem": "vmd", 00:21:04.410 "config": [] 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "subsystem": "accel", 00:21:04.410 "config": [ 00:21:04.410 { 00:21:04.410 "method": "accel_set_options", 00:21:04.410 "params": { 00:21:04.410 "small_cache_size": 128, 00:21:04.410 "large_cache_size": 16, 00:21:04.410 "task_count": 2048, 00:21:04.410 "sequence_count": 2048, 00:21:04.410 "buf_count": 2048 00:21:04.410 } 00:21:04.410 } 00:21:04.410 ] 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "subsystem": "bdev", 00:21:04.410 "config": [ 00:21:04.410 { 00:21:04.410 "method": "bdev_set_options", 00:21:04.410 "params": { 00:21:04.410 "bdev_io_pool_size": 65535, 00:21:04.410 "bdev_io_cache_size": 256, 00:21:04.410 "bdev_auto_examine": true, 00:21:04.410 "iobuf_small_cache_size": 128, 00:21:04.410 "iobuf_large_cache_size": 16 00:21:04.410 } 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "method": "bdev_raid_set_options", 00:21:04.410 "params": { 00:21:04.410 "process_window_size_kb": 1024, 00:21:04.410 "process_max_bandwidth_mb_sec": 0 00:21:04.410 } 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "method": "bdev_iscsi_set_options", 00:21:04.410 "params": { 00:21:04.410 "timeout_sec": 30 00:21:04.410 } 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "method": "bdev_nvme_set_options", 00:21:04.410 "params": { 00:21:04.410 "action_on_timeout": "none", 00:21:04.410 "timeout_us": 0, 00:21:04.410 "timeout_admin_us": 0, 00:21:04.410 "keep_alive_timeout_ms": 10000, 00:21:04.410 "arbitration_burst": 0, 00:21:04.410 "low_priority_weight": 0, 00:21:04.410 "medium_priority_weight": 0, 00:21:04.410 "high_priority_weight": 0, 00:21:04.410 "nvme_adminq_poll_period_us": 10000, 00:21:04.410 "nvme_ioq_poll_period_us": 0, 00:21:04.410 "io_queue_requests": 0, 00:21:04.410 "delay_cmd_submit": true, 00:21:04.410 "transport_retry_count": 4, 00:21:04.410 "bdev_retry_count": 3, 00:21:04.410 "transport_ack_timeout": 0, 00:21:04.410 "ctrlr_loss_timeout_sec": 0, 00:21:04.410 "reconnect_delay_sec": 0, 00:21:04.410 "fast_io_fail_timeout_sec": 0, 00:21:04.410 "disable_auto_failback": false, 00:21:04.410 "generate_uuids": false, 00:21:04.410 "transport_tos": 0, 00:21:04.410 "nvme_error_stat": false, 00:21:04.410 "rdma_srq_size": 0, 00:21:04.410 "io_path_stat": false, 
00:21:04.410 "allow_accel_sequence": false, 00:21:04.410 "rdma_max_cq_size": 0, 00:21:04.410 "rdma_cm_event_timeout_ms": 0, 00:21:04.410 "dhchap_digests": [ 00:21:04.410 "sha256", 00:21:04.410 "sha384", 00:21:04.410 "sha512" 00:21:04.410 ], 00:21:04.410 "dhchap_dhgroups": [ 00:21:04.410 "null", 00:21:04.410 "ffdhe2048", 00:21:04.410 "ffdhe3072", 00:21:04.410 "ffdhe4096", 00:21:04.410 "ffdhe6144", 00:21:04.410 "ffdhe8192" 00:21:04.410 ] 00:21:04.410 } 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "method": "bdev_nvme_set_hotplug", 00:21:04.410 "params": { 00:21:04.410 "period_us": 100000, 00:21:04.410 "enable": false 00:21:04.410 } 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "method": "bdev_wait_for_examine" 00:21:04.410 } 00:21:04.410 ] 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "subsystem": "scsi", 00:21:04.410 "config": null 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "subsystem": "scheduler", 00:21:04.410 "config": [ 00:21:04.410 { 00:21:04.410 "method": "framework_set_scheduler", 00:21:04.410 "params": { 00:21:04.410 "name": "static" 00:21:04.410 } 00:21:04.410 } 00:21:04.410 ] 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "subsystem": "vhost_scsi", 00:21:04.410 "config": [] 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "subsystem": "vhost_blk", 00:21:04.410 "config": [] 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "subsystem": "ublk", 00:21:04.410 "config": [] 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "subsystem": "nbd", 00:21:04.410 "config": [] 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "subsystem": "nvmf", 00:21:04.410 "config": [ 00:21:04.410 { 00:21:04.410 "method": "nvmf_set_config", 00:21:04.410 "params": { 00:21:04.410 "discovery_filter": "match_any", 00:21:04.410 "admin_cmd_passthru": { 00:21:04.410 "identify_ctrlr": false 00:21:04.410 }, 00:21:04.410 "dhchap_digests": [ 00:21:04.410 "sha256", 00:21:04.410 "sha384", 00:21:04.410 "sha512" 00:21:04.410 ], 00:21:04.410 "dhchap_dhgroups": [ 00:21:04.410 "null", 00:21:04.410 "ffdhe2048", 00:21:04.410 "ffdhe3072", 00:21:04.410 "ffdhe4096", 00:21:04.410 "ffdhe6144", 00:21:04.410 "ffdhe8192" 00:21:04.410 ] 00:21:04.410 } 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "method": "nvmf_set_max_subsystems", 00:21:04.410 "params": { 00:21:04.410 "max_subsystems": 1024 00:21:04.410 } 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "method": "nvmf_set_crdt", 00:21:04.410 "params": { 00:21:04.410 "crdt1": 0, 00:21:04.410 "crdt2": 0, 00:21:04.410 "crdt3": 0 00:21:04.410 } 00:21:04.410 }, 00:21:04.410 { 00:21:04.410 "method": "nvmf_create_transport", 00:21:04.410 "params": { 00:21:04.410 "trtype": "TCP", 00:21:04.410 "max_queue_depth": 128, 00:21:04.410 "max_io_qpairs_per_ctrlr": 127, 00:21:04.410 "in_capsule_data_size": 4096, 00:21:04.410 "max_io_size": 131072, 00:21:04.410 "io_unit_size": 131072, 00:21:04.410 "max_aq_depth": 128, 00:21:04.410 "num_shared_buffers": 511, 00:21:04.410 "buf_cache_size": 4294967295, 00:21:04.410 "dif_insert_or_strip": false, 00:21:04.410 "zcopy": false, 00:21:04.410 "c2h_success": true, 00:21:04.410 "sock_priority": 0, 00:21:04.410 "abort_timeout_sec": 1, 00:21:04.410 "ack_timeout": 0, 00:21:04.411 "data_wr_pool_size": 0 00:21:04.411 } 00:21:04.411 } 00:21:04.411 ] 00:21:04.411 }, 00:21:04.411 { 00:21:04.411 "subsystem": "iscsi", 00:21:04.411 "config": [ 00:21:04.411 { 00:21:04.411 "method": "iscsi_set_options", 00:21:04.411 "params": { 00:21:04.411 "node_base": "iqn.2016-06.io.spdk", 00:21:04.411 "max_sessions": 128, 00:21:04.411 "max_connections_per_session": 2, 00:21:04.411 "max_queue_depth": 64, 00:21:04.411 
"default_time2wait": 2, 00:21:04.411 "default_time2retain": 20, 00:21:04.411 "first_burst_length": 8192, 00:21:04.411 "immediate_data": true, 00:21:04.411 "allow_duplicated_isid": false, 00:21:04.411 "error_recovery_level": 0, 00:21:04.411 "nop_timeout": 60, 00:21:04.411 "nop_in_interval": 30, 00:21:04.411 "disable_chap": false, 00:21:04.411 "require_chap": false, 00:21:04.411 "mutual_chap": false, 00:21:04.411 "chap_group": 0, 00:21:04.411 "max_large_datain_per_connection": 64, 00:21:04.411 "max_r2t_per_connection": 4, 00:21:04.411 "pdu_pool_size": 36864, 00:21:04.411 "immediate_data_pool_size": 16384, 00:21:04.411 "data_out_pool_size": 2048 00:21:04.411 } 00:21:04.411 } 00:21:04.411 ] 00:21:04.411 } 00:21:04.411 ] 00:21:04.411 } 00:21:04.411 13:41:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:21:04.411 13:41:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58421 00:21:04.411 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58421 ']' 00:21:04.411 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58421 00:21:04.411 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:21:04.411 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:04.411 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58421 00:21:04.411 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:04.411 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:04.411 killing process with pid 58421 00:21:04.411 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58421' 00:21:04.411 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 58421 00:21:04.411 13:41:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58421 00:21:07.713 13:41:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58483 00:21:07.713 13:41:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:21:07.713 13:41:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:21:13.045 13:41:19 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58483 00:21:13.045 13:41:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 58483 ']' 00:21:13.045 13:41:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 58483 00:21:13.045 13:41:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:21:13.045 13:41:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.045 13:41:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58483 00:21:13.045 13:41:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:13.045 13:41:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:13.045 killing process with pid 58483 00:21:13.045 13:41:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58483' 00:21:13.045 13:41:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- 
# kill 58483 00:21:13.045 13:41:19 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 58483 00:21:14.952 13:41:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:21:14.952 13:41:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:21:14.952 00:21:14.952 real 0m12.103s 00:21:14.952 user 0m11.554s 00:21:14.952 sys 0m0.926s 00:21:14.952 13:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:14.952 13:41:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:21:14.952 ************************************ 00:21:14.952 END TEST skip_rpc_with_json 00:21:14.952 ************************************ 00:21:14.952 13:41:22 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:21:14.952 13:41:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:14.952 13:41:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:14.952 13:41:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:14.952 ************************************ 00:21:14.952 START TEST skip_rpc_with_delay 00:21:14.952 ************************************ 00:21:14.952 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:21:14.952 13:41:22 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:21:14.952 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:21:14.952 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:21:14.952 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.952 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.952 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.952 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.953 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.953 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.953 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:14.953 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:21:14.953 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:21:15.212 [2024-11-20 13:41:22.695770] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
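The error above is the expected outcome: --wait-for-rpc defers subsystem initialization until an RPC tells the target to proceed, which is impossible when no RPC server is started. The normal pairing looks like this (a sketch):

    build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
    # pre-init tuning would go here, e.g. iobuf_set_options
    scripts/rpc.py framework_start_init   # resume subsystem initialization
    scripts/rpc.py framework_wait_init    # optional: block until init completes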
00:21:15.212 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:21:15.212 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:15.212 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:15.212 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:15.212 00:21:15.212 real 0m0.182s 00:21:15.212 user 0m0.103s 00:21:15.212 sys 0m0.077s 00:21:15.212 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:15.212 13:41:22 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:21:15.212 ************************************ 00:21:15.212 END TEST skip_rpc_with_delay 00:21:15.212 ************************************ 00:21:15.212 13:41:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:21:15.212 13:41:22 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:21:15.212 13:41:22 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:21:15.212 13:41:22 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:15.212 13:41:22 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:15.212 13:41:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:15.212 ************************************ 00:21:15.212 START TEST exit_on_failed_rpc_init 00:21:15.212 ************************************ 00:21:15.212 13:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:21:15.212 13:41:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58616 00:21:15.212 13:41:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58616 00:21:15.212 13:41:22 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:21:15.212 13:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 58616 ']' 00:21:15.212 13:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.212 13:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.212 13:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.212 13:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.212 13:41:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:21:15.530 [2024-11-20 13:41:22.944388] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
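The waitforlisten helper traced above simply polls until the new process answers on its RPC socket. A reduced sketch of the idiom (retry count and interval assumed; the real helper in autotest_common.sh handles more cases):

    waitforlisten() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock}
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2> /dev/null || return 1   # target died while starting
            scripts/rpc.py -s "$sock" -t 1 rpc_get_methods &> /dev/null && return 0
            sleep 0.1
        done
        return 1   # never started listening
    }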
00:21:15.530 [2024-11-20 13:41:22.944516] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58616 ] 00:21:15.530 [2024-11-20 13:41:23.125553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.789 [2024-11-20 13:41:23.263317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:16.725 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.725 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:21:16.725 13:41:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:21:16.725 13:41:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:21:16.725 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:21:16.725 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:21:16.725 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:16.725 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:16.725 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:16.725 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:16.725 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:16.725 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:16.725 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:16.725 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:21:16.725 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:21:16.725 [2024-11-20 13:41:24.349684] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:21:16.725 [2024-11-20 13:41:24.349831] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58640 ] 00:21:16.984 [2024-11-20 13:41:24.527206] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.984 [2024-11-20 13:41:24.667843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:16.984 [2024-11-20 13:41:24.667964] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:21:16.984 [2024-11-20 13:41:24.667979] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:21:16.984 [2024-11-20 13:41:24.668003] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:17.243 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:21:17.243 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:17.243 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:21:17.243 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:21:17.243 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:21:17.243 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:17.243 13:41:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:17.243 13:41:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58616 00:21:17.243 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 58616 ']' 00:21:17.243 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 58616 00:21:17.243 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:21:17.243 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:17.243 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58616 00:21:17.506 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:17.506 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:17.506 killing process with pid 58616 00:21:17.506 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58616' 00:21:17.506 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 58616 00:21:17.506 13:41:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 58616 00:21:20.096 00:21:20.096 real 0m4.862s 00:21:20.096 user 0m5.245s 00:21:20.096 sys 0m0.604s 00:21:20.096 13:41:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.096 13:41:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:21:20.096 ************************************ 00:21:20.096 END TEST exit_on_failed_rpc_init 00:21:20.096 ************************************ 00:21:20.096 13:41:27 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:21:20.096 00:21:20.096 real 0m25.516s 00:21:20.096 user 0m24.491s 00:21:20.096 sys 0m2.324s 00:21:20.096 13:41:27 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.096 13:41:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:20.096 ************************************ 00:21:20.096 END TEST skip_rpc 00:21:20.096 ************************************ 00:21:20.096 13:41:27 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:21:20.096 13:41:27 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:20.096 13:41:27 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.096 13:41:27 -- common/autotest_common.sh@10 -- # set +x 00:21:20.096 
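The exit_on_failed_rpc_init case above hinges on the second target refusing a busy RPC socket and exiting non-zero, exactly as the two *ERROR* lines show. Reproduced by hand (a sketch; default socket path assumed):

    build/bin/spdk_tgt -m 0x1 &          # first target owns /var/tmp/spdk.sock
    sleep 1                              # crude stand-in for waitforlisten
    build/bin/spdk_tgt -m 0x2            # "Unix domain socket path ... in use"
    echo $?                              # non-zero exit via spdk_app_stop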
************************************ 00:21:20.096 START TEST rpc_client 00:21:20.096 ************************************ 00:21:20.096 13:41:27 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:21:20.356 * Looking for test storage... 00:21:20.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:21:20.356 13:41:27 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:20.356 13:41:27 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:21:20.356 13:41:27 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:20.356 13:41:27 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@345 -- # : 1 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@353 -- # local d=1 00:21:20.356 13:41:27 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:20.357 13:41:27 rpc_client -- scripts/common.sh@355 -- # echo 1 00:21:20.357 13:41:27 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:21:20.357 13:41:27 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:21:20.357 13:41:27 rpc_client -- scripts/common.sh@353 -- # local d=2 00:21:20.357 13:41:27 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:20.357 13:41:27 rpc_client -- scripts/common.sh@355 -- # echo 2 00:21:20.357 13:41:27 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:21:20.357 13:41:27 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:20.357 13:41:27 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:20.357 13:41:27 rpc_client -- scripts/common.sh@368 -- # return 0 00:21:20.357 13:41:27 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:20.357 13:41:27 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:20.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.357 --rc genhtml_branch_coverage=1 00:21:20.357 --rc genhtml_function_coverage=1 00:21:20.357 --rc genhtml_legend=1 00:21:20.357 --rc geninfo_all_blocks=1 00:21:20.357 --rc geninfo_unexecuted_blocks=1 00:21:20.357 00:21:20.357 ' 00:21:20.357 13:41:27 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:20.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.357 --rc genhtml_branch_coverage=1 00:21:20.357 --rc genhtml_function_coverage=1 00:21:20.357 --rc genhtml_legend=1 00:21:20.357 --rc geninfo_all_blocks=1 00:21:20.357 --rc geninfo_unexecuted_blocks=1 00:21:20.357 00:21:20.357 ' 00:21:20.357 13:41:27 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:20.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.357 --rc genhtml_branch_coverage=1 00:21:20.357 --rc genhtml_function_coverage=1 00:21:20.357 --rc genhtml_legend=1 00:21:20.357 --rc geninfo_all_blocks=1 00:21:20.357 --rc geninfo_unexecuted_blocks=1 00:21:20.357 00:21:20.357 ' 00:21:20.357 13:41:27 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:20.357 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.357 --rc genhtml_branch_coverage=1 00:21:20.357 --rc genhtml_function_coverage=1 00:21:20.357 --rc genhtml_legend=1 00:21:20.357 --rc geninfo_all_blocks=1 00:21:20.357 --rc geninfo_unexecuted_blocks=1 00:21:20.357 00:21:20.357 ' 00:21:20.357 13:41:27 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:21:20.357 OK 00:21:20.357 13:41:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:21:20.357 00:21:20.357 real 0m0.262s 00:21:20.357 user 0m0.150s 00:21:20.357 sys 0m0.131s 00:21:20.357 13:41:28 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.357 13:41:28 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:21:20.357 ************************************ 00:21:20.357 END TEST rpc_client 00:21:20.357 ************************************ 00:21:20.615 13:41:28 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:21:20.615 13:41:28 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:20.615 13:41:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.615 13:41:28 -- common/autotest_common.sh@10 -- # set +x 00:21:20.615 ************************************ 00:21:20.615 START TEST json_config 00:21:20.615 ************************************ 00:21:20.615 13:41:28 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:21:20.615 13:41:28 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:20.615 13:41:28 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:21:20.615 13:41:28 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:20.616 13:41:28 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:20.616 13:41:28 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:20.616 13:41:28 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:20.616 13:41:28 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:20.616 13:41:28 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:21:20.616 13:41:28 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:21:20.616 13:41:28 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:21:20.616 13:41:28 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:21:20.616 13:41:28 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:21:20.616 13:41:28 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:21:20.616 13:41:28 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:21:20.616 13:41:28 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:20.616 13:41:28 json_config -- scripts/common.sh@344 -- # case "$op" in 00:21:20.616 13:41:28 json_config -- scripts/common.sh@345 -- # : 1 00:21:20.616 13:41:28 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:20.616 13:41:28 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:20.616 13:41:28 json_config -- scripts/common.sh@365 -- # decimal 1 00:21:20.616 13:41:28 json_config -- scripts/common.sh@353 -- # local d=1 00:21:20.616 13:41:28 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:20.616 13:41:28 json_config -- scripts/common.sh@355 -- # echo 1 00:21:20.616 13:41:28 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:21:20.616 13:41:28 json_config -- scripts/common.sh@366 -- # decimal 2 00:21:20.616 13:41:28 json_config -- scripts/common.sh@353 -- # local d=2 00:21:20.616 13:41:28 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:20.616 13:41:28 json_config -- scripts/common.sh@355 -- # echo 2 00:21:20.616 13:41:28 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:21:20.616 13:41:28 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:20.616 13:41:28 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:20.616 13:41:28 json_config -- scripts/common.sh@368 -- # return 0 00:21:20.616 13:41:28 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:20.616 13:41:28 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:20.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.616 --rc genhtml_branch_coverage=1 00:21:20.616 --rc genhtml_function_coverage=1 00:21:20.616 --rc genhtml_legend=1 00:21:20.616 --rc geninfo_all_blocks=1 00:21:20.616 --rc geninfo_unexecuted_blocks=1 00:21:20.616 00:21:20.616 ' 00:21:20.616 13:41:28 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:20.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.616 --rc genhtml_branch_coverage=1 00:21:20.616 --rc genhtml_function_coverage=1 00:21:20.616 --rc genhtml_legend=1 00:21:20.616 --rc geninfo_all_blocks=1 00:21:20.616 --rc geninfo_unexecuted_blocks=1 00:21:20.616 00:21:20.616 ' 00:21:20.616 13:41:28 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:20.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.616 --rc genhtml_branch_coverage=1 00:21:20.616 --rc genhtml_function_coverage=1 00:21:20.616 --rc genhtml_legend=1 00:21:20.616 --rc geninfo_all_blocks=1 00:21:20.616 --rc geninfo_unexecuted_blocks=1 00:21:20.616 00:21:20.616 ' 00:21:20.616 13:41:28 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:20.616 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.616 --rc genhtml_branch_coverage=1 00:21:20.616 --rc genhtml_function_coverage=1 00:21:20.616 --rc genhtml_legend=1 00:21:20.616 --rc geninfo_all_blocks=1 00:21:20.616 --rc geninfo_unexecuted_blocks=1 00:21:20.616 00:21:20.616 ' 00:21:20.616 13:41:28 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:20.616 13:41:28 json_config -- nvmf/common.sh@7 -- # uname -s 00:21:20.616 13:41:28 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.616 13:41:28 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.616 13:41:28 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.616 13:41:28 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.616 13:41:28 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.616 13:41:28 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.616 13:41:28 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.616 13:41:28 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.616 13:41:28 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.616 13:41:28 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.878 13:41:28 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79adfa99-5396-4778-86f4-6e24fc6ac5f1 00:21:20.878 13:41:28 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=79adfa99-5396-4778-86f4-6e24fc6ac5f1 00:21:20.878 13:41:28 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.878 13:41:28 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.878 13:41:28 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:20.878 13:41:28 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.878 13:41:28 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:20.878 13:41:28 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:21:20.878 13:41:28 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.878 13:41:28 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.878 13:41:28 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.878 13:41:28 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.878 13:41:28 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.878 13:41:28 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.878 13:41:28 json_config -- paths/export.sh@5 -- # export PATH 00:21:20.878 13:41:28 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.878 13:41:28 json_config -- nvmf/common.sh@51 -- # : 0 00:21:20.878 13:41:28 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:20.878 13:41:28 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:20.878 13:41:28 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.878 13:41:28 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.878 13:41:28 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.878 13:41:28 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:20.878 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:20.878 13:41:28 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:20.878 13:41:28 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:20.878 13:41:28 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:20.878 13:41:28 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:21:20.878 13:41:28 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:21:20.878 13:41:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:21:20.878 13:41:28 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:21:20.878 13:41:28 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:21:20.878 13:41:28 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:21:20.878 WARNING: No tests are enabled so not running JSON configuration tests 00:21:20.878 13:41:28 json_config -- json_config/json_config.sh@28 -- # exit 0 00:21:20.878 00:21:20.878 real 0m0.225s 00:21:20.878 user 0m0.136s 00:21:20.878 sys 0m0.091s 00:21:20.878 13:41:28 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:20.878 13:41:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:21:20.878 ************************************ 00:21:20.878 END TEST json_config 00:21:20.878 ************************************ 00:21:20.878 13:41:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:21:20.878 13:41:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:20.878 13:41:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:20.878 13:41:28 -- common/autotest_common.sh@10 -- # set +x 00:21:20.878 ************************************ 00:21:20.878 START TEST json_config_extra_key 00:21:20.878 ************************************ 00:21:20.878 13:41:28 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:21:20.878 13:41:28 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:20.878 13:41:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:21:20.878 13:41:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:20.878 13:41:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:20.878 13:41:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:20.878 13:41:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:20.878 13:41:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:20.878 13:41:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:21:20.878 13:41:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:21:20.878 13:41:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:21:20.878 13:41:28 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:21:20.878 13:41:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:21:20.878 13:41:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:21:20.878 13:41:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:21:20.878 13:41:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:20.878 13:41:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:21:20.878 13:41:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:21:20.878 13:41:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:20.878 13:41:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:20.878 13:41:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:21:20.878 13:41:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:21:20.879 13:41:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:20.879 13:41:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:21:20.879 13:41:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:21:20.879 13:41:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:21:20.879 13:41:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:21:20.879 13:41:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:20.879 13:41:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:21:20.879 13:41:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:21:20.879 13:41:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:20.879 13:41:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:20.879 13:41:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:21:20.879 13:41:28 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:20.879 13:41:28 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:20.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.879 --rc genhtml_branch_coverage=1 00:21:20.879 --rc genhtml_function_coverage=1 00:21:20.879 --rc genhtml_legend=1 00:21:20.879 --rc geninfo_all_blocks=1 00:21:20.879 --rc geninfo_unexecuted_blocks=1 00:21:20.879 00:21:20.879 ' 00:21:20.879 13:41:28 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:20.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.879 --rc genhtml_branch_coverage=1 00:21:20.879 --rc genhtml_function_coverage=1 00:21:20.879 --rc genhtml_legend=1 00:21:20.879 --rc geninfo_all_blocks=1 00:21:20.879 --rc geninfo_unexecuted_blocks=1 00:21:20.879 00:21:20.879 ' 00:21:20.879 13:41:28 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:20.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.879 --rc genhtml_branch_coverage=1 00:21:20.879 --rc genhtml_function_coverage=1 00:21:20.879 --rc genhtml_legend=1 00:21:20.879 --rc geninfo_all_blocks=1 00:21:20.879 --rc geninfo_unexecuted_blocks=1 00:21:20.879 00:21:20.879 ' 00:21:20.879 13:41:28 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:20.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:20.879 --rc genhtml_branch_coverage=1 00:21:20.879 --rc 
genhtml_function_coverage=1 00:21:20.879 --rc genhtml_legend=1 00:21:20.879 --rc geninfo_all_blocks=1 00:21:20.879 --rc geninfo_unexecuted_blocks=1 00:21:20.879 00:21:20.879 ' 00:21:20.879 13:41:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:79adfa99-5396-4778-86f4-6e24fc6ac5f1 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=79adfa99-5396-4778-86f4-6e24fc6ac5f1 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:20.879 13:41:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:21:20.879 13:41:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.879 13:41:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.879 13:41:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.879 13:41:28 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.879 13:41:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.879 13:41:28 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.879 13:41:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:21:20.879 13:41:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:20.879 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:20.879 13:41:28 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:20.879 13:41:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:21:20.879 13:41:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:21:20.879 13:41:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:21:20.879 13:41:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:21:20.879 13:41:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:21:20.879 13:41:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:21:20.879 13:41:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:21:20.879 13:41:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:21:20.879 13:41:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:21:20.879 13:41:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:21:20.879 13:41:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:21:20.879 INFO: launching applications... 
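The launch traced below is driven by test/json_config/common.sh, which keeps one bash associative array per app attribute (pid, RPC socket, CLI params, config path), all keyed by the app name. A minimal sketch of that bookkeeping and of json_config_test_start_app, assuming SPDK_BIN as a stand-in for the spdk_tgt path seen in the log, and omitting the ERR trap and argument checks of the real helper:

SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt   # assumption: path copied from the trace
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

json_config_test_start_app() {
    local app=$1
    shift
    # Launch the target in the background; app_params is expanded
    # unquoted on purpose so its flags are word-split.
    "$SPDK_BIN" ${app_params[$app]} -r "${app_socket[$app]}" "$@" &
    # Record the pid so the shutdown helper can signal and poll it later.
    app_pid[$app]=$!
}

Invoked as json_config_test_start_app target --json "${configs_path[target]}", which matches the call in the trace that follows.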
00:21:20.879 13:41:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:21:20.879 13:41:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:21:20.879 13:41:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:21:20.879 13:41:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:21:20.879 13:41:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:21:20.879 13:41:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:21:20.879 13:41:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:21:20.879 13:41:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:21:20.879 13:41:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58850 00:21:20.879 13:41:28 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:21:20.879 Waiting for target to run... 00:21:20.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:21:20.879 13:41:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:21:20.879 13:41:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58850 /var/tmp/spdk_tgt.sock 00:21:20.879 13:41:28 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58850 ']' 00:21:20.879 13:41:28 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:21:20.879 13:41:28 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:20.879 13:41:28 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:21:20.879 13:41:28 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:20.879 13:41:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:21:21.139 [2024-11-20 13:41:28.686409] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:21:21.139 [2024-11-20 13:41:28.686633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58850 ] 00:21:21.396 [2024-11-20 13:41:29.083949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:21.655 [2024-11-20 13:41:29.204947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.592 13:41:30 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:22.592 13:41:30 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:21:22.592 13:41:30 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:21:22.592 00:21:22.592 13:41:30 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:21:22.592 INFO: shutting down applications... 
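The shutdown traced below is a single SIGINT followed by a bounded poll: kill -0 probes whether the pid still exists without delivering a signal, and the loop gives up after 30 half-second intervals. A condensed sketch of json_config_test_shutdown_app under those assumptions, with the failure-path plumbing of the real helper simplified:

json_config_test_shutdown_app() {
    local app=$1 i
    # One SIGINT suffices; spdk_tgt handles it and exits cleanly.
    kill -SIGINT "${app_pid[$app]}"
    for ((i = 0; i < 30; i++)); do
        # kill -0 sends no signal; it only checks that the pid is alive.
        if ! kill -0 "${app_pid[$app]}" 2> /dev/null; then
            app_pid[$app]=
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    return 1
}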
00:21:22.592 13:41:30 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:21:22.592 13:41:30 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:21:22.592 13:41:30 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:21:22.592 13:41:30 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58850 ]] 00:21:22.592 13:41:30 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58850 00:21:22.592 13:41:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:21:22.592 13:41:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:21:22.592 13:41:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58850 00:21:22.592 13:41:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:21:22.851 13:41:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:21:22.851 13:41:30 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:21:22.851 13:41:30 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58850 00:21:22.851 13:41:30 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:21:23.429 13:41:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:21:23.429 13:41:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:21:23.429 13:41:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58850 00:21:23.429 13:41:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:21:23.998 13:41:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:21:23.998 13:41:31 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:21:23.998 13:41:31 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58850 00:21:23.998 13:41:31 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:21:24.568 13:41:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:21:24.568 13:41:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:21:24.568 13:41:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58850 00:21:24.568 13:41:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:21:25.142 13:41:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:21:25.142 13:41:32 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:21:25.142 13:41:32 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58850 00:21:25.142 13:41:32 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:21:25.420 13:41:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:21:25.420 13:41:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:21:25.420 13:41:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58850 00:21:25.420 SPDK target shutdown done 00:21:25.420 Success 00:21:25.420 13:41:33 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:21:25.420 13:41:33 json_config_extra_key -- json_config/common.sh@43 -- # break 00:21:25.420 13:41:33 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:21:25.420 13:41:33 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:21:25.420 13:41:33 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:21:25.420 ************************************ 00:21:25.420 END TEST json_config_extra_key 00:21:25.420 
************************************ 00:21:25.420 00:21:25.420 real 0m4.685s 00:21:25.420 user 0m4.500s 00:21:25.420 sys 0m0.553s 00:21:25.420 13:41:33 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:25.420 13:41:33 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:21:25.420 13:41:33 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:21:25.681 13:41:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:25.681 13:41:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:25.681 13:41:33 -- common/autotest_common.sh@10 -- # set +x 00:21:25.681 ************************************ 00:21:25.681 START TEST alias_rpc 00:21:25.681 ************************************ 00:21:25.681 13:41:33 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:21:25.681 * Looking for test storage... 00:21:25.681 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:21:25.681 13:41:33 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:25.681 13:41:33 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:21:25.681 13:41:33 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:25.681 13:41:33 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@345 -- # : 1 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:25.681 13:41:33 alias_rpc -- scripts/common.sh@368 -- # return 0 00:21:25.681 13:41:33 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:25.681 13:41:33 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:25.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.681 --rc genhtml_branch_coverage=1 00:21:25.681 --rc genhtml_function_coverage=1 00:21:25.681 --rc genhtml_legend=1 00:21:25.681 --rc geninfo_all_blocks=1 00:21:25.681 --rc geninfo_unexecuted_blocks=1 00:21:25.681 00:21:25.681 ' 00:21:25.681 13:41:33 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:25.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.681 --rc genhtml_branch_coverage=1 00:21:25.681 --rc genhtml_function_coverage=1 00:21:25.681 --rc genhtml_legend=1 00:21:25.681 --rc geninfo_all_blocks=1 00:21:25.681 --rc geninfo_unexecuted_blocks=1 00:21:25.681 00:21:25.681 ' 00:21:25.681 13:41:33 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:25.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.681 --rc genhtml_branch_coverage=1 00:21:25.681 --rc genhtml_function_coverage=1 00:21:25.681 --rc genhtml_legend=1 00:21:25.681 --rc geninfo_all_blocks=1 00:21:25.681 --rc geninfo_unexecuted_blocks=1 00:21:25.681 00:21:25.681 ' 00:21:25.681 13:41:33 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:25.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.681 --rc genhtml_branch_coverage=1 00:21:25.681 --rc genhtml_function_coverage=1 00:21:25.681 --rc genhtml_legend=1 00:21:25.681 --rc geninfo_all_blocks=1 00:21:25.681 --rc geninfo_unexecuted_blocks=1 00:21:25.681 00:21:25.681 ' 00:21:25.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
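The version probe traced above runs at the top of every test: scripts/common.sh implements lt 1.15 2 by splitting both version strings on '.', '-' and ':' and comparing them numerically field by field, which here selects the lcov coverage flags appropriate to the installed version. A condensed sketch, omitting the decimal() normalization the real helper applies to non-numeric fields:

lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-: op=$2 v ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    # Walk the longer of the two field lists; absent fields count as 0,
    # so "2" compares like "2.0".
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        ((${ver1[v]:-0} > ${ver2[v]:-0})) && { [[ $op == '>' ]]; return; }
        ((${ver1[v]:-0} < ${ver2[v]:-0})) && { [[ $op == '<' ]]; return; }
    done
    # All fields equal: only the equality-admitting operators succeed.
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}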
00:21:25.681 13:41:33 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:21:25.681 13:41:33 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58967 00:21:25.681 13:41:33 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:25.681 13:41:33 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58967 00:21:25.681 13:41:33 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58967 ']' 00:21:25.681 13:41:33 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.681 13:41:33 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.681 13:41:33 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.681 13:41:33 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.681 13:41:33 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:25.940 [2024-11-20 13:41:33.484312] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:21:25.940 [2024-11-20 13:41:33.484556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58967 ] 00:21:25.940 [2024-11-20 13:41:33.643597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.197 [2024-11-20 13:41:33.767807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.141 13:41:34 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:27.141 13:41:34 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:21:27.141 13:41:34 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:21:27.400 13:41:34 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58967 00:21:27.400 13:41:34 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58967 ']' 00:21:27.400 13:41:34 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58967 00:21:27.400 13:41:34 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:21:27.400 13:41:34 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:27.400 13:41:34 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58967 00:21:27.400 13:41:34 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:27.400 13:41:34 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:27.400 13:41:34 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58967' 00:21:27.400 killing process with pid 58967 00:21:27.400 13:41:34 alias_rpc -- common/autotest_common.sh@973 -- # kill 58967 00:21:27.400 13:41:34 alias_rpc -- common/autotest_common.sh@978 -- # wait 58967 00:21:29.934 00:21:29.934 real 0m4.461s 00:21:29.934 user 0m4.539s 00:21:29.934 sys 0m0.602s 00:21:29.934 13:41:37 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:29.934 13:41:37 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:21:29.934 ************************************ 00:21:29.934 END TEST alias_rpc 00:21:29.934 ************************************ 00:21:30.194 13:41:37 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:21:30.194 13:41:37 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:21:30.194 13:41:37 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:30.194 13:41:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:30.194 13:41:37 -- common/autotest_common.sh@10 -- # set +x 00:21:30.194 ************************************ 00:21:30.194 START TEST spdkcli_tcp 00:21:30.194 ************************************ 00:21:30.194 13:41:37 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:21:30.194 * Looking for test storage... 00:21:30.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:30.194 13:41:37 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:30.194 13:41:37 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:21:30.194 13:41:37 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:30.194 13:41:37 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:21:30.194 13:41:37 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:21:30.454 13:41:37 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:30.454 13:41:37 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:30.454 13:41:37 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:21:30.454 13:41:37 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:30.454 13:41:37 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:30.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.454 --rc genhtml_branch_coverage=1 00:21:30.454 --rc genhtml_function_coverage=1 00:21:30.454 --rc genhtml_legend=1 00:21:30.454 --rc geninfo_all_blocks=1 00:21:30.454 --rc geninfo_unexecuted_blocks=1 00:21:30.454 00:21:30.454 ' 00:21:30.454 13:41:37 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:30.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.454 --rc genhtml_branch_coverage=1 00:21:30.454 --rc genhtml_function_coverage=1 00:21:30.454 --rc genhtml_legend=1 00:21:30.454 --rc geninfo_all_blocks=1 00:21:30.454 --rc geninfo_unexecuted_blocks=1 00:21:30.454 00:21:30.454 ' 00:21:30.454 13:41:37 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:30.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.454 --rc genhtml_branch_coverage=1 00:21:30.454 --rc genhtml_function_coverage=1 00:21:30.454 --rc genhtml_legend=1 00:21:30.454 --rc geninfo_all_blocks=1 00:21:30.454 --rc geninfo_unexecuted_blocks=1 00:21:30.454 00:21:30.454 ' 00:21:30.454 13:41:37 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:30.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:30.454 --rc genhtml_branch_coverage=1 00:21:30.454 --rc genhtml_function_coverage=1 00:21:30.454 --rc genhtml_legend=1 00:21:30.454 --rc geninfo_all_blocks=1 00:21:30.454 --rc geninfo_unexecuted_blocks=1 00:21:30.454 00:21:30.454 ' 00:21:30.454 13:41:37 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:30.454 13:41:37 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:30.454 13:41:37 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:30.454 13:41:37 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:21:30.454 13:41:37 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:21:30.454 13:41:37 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:30.454 13:41:37 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:21:30.454 13:41:37 spdkcli_tcp -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:21:30.454 13:41:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:30.454 13:41:37 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59074 00:21:30.454 13:41:37 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:21:30.454 13:41:37 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59074 00:21:30.454 13:41:37 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59074 ']' 00:21:30.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.454 13:41:37 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.454 13:41:37 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.454 13:41:37 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.454 13:41:37 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.454 13:41:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:30.454 [2024-11-20 13:41:38.033020] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:21:30.454 [2024-11-20 13:41:38.033241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59074 ] 00:21:30.713 [2024-11-20 13:41:38.213793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:30.713 [2024-11-20 13:41:38.346128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.713 [2024-11-20 13:41:38.346169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.683 13:41:39 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.683 13:41:39 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:21:31.683 13:41:39 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59091 00:21:31.683 13:41:39 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:21:31.683 13:41:39 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:21:31.943 [ 00:21:31.943 "bdev_malloc_delete", 00:21:31.943 "bdev_malloc_create", 00:21:31.943 "bdev_null_resize", 00:21:31.943 "bdev_null_delete", 00:21:31.943 "bdev_null_create", 00:21:31.943 "bdev_nvme_cuse_unregister", 00:21:31.943 "bdev_nvme_cuse_register", 00:21:31.943 "bdev_opal_new_user", 00:21:31.943 "bdev_opal_set_lock_state", 00:21:31.943 "bdev_opal_delete", 00:21:31.943 "bdev_opal_get_info", 00:21:31.943 "bdev_opal_create", 00:21:31.944 "bdev_nvme_opal_revert", 00:21:31.944 "bdev_nvme_opal_init", 00:21:31.944 "bdev_nvme_send_cmd", 00:21:31.944 "bdev_nvme_set_keys", 00:21:31.944 "bdev_nvme_get_path_iostat", 00:21:31.944 "bdev_nvme_get_mdns_discovery_info", 00:21:31.944 "bdev_nvme_stop_mdns_discovery", 00:21:31.944 "bdev_nvme_start_mdns_discovery", 00:21:31.944 "bdev_nvme_set_multipath_policy", 00:21:31.944 "bdev_nvme_set_preferred_path", 00:21:31.944 "bdev_nvme_get_io_paths", 00:21:31.944 "bdev_nvme_remove_error_injection", 00:21:31.944 "bdev_nvme_add_error_injection", 00:21:31.944 "bdev_nvme_get_discovery_info", 00:21:31.944 "bdev_nvme_stop_discovery", 00:21:31.944 "bdev_nvme_start_discovery", 00:21:31.944 
"bdev_nvme_get_controller_health_info", 00:21:31.944 "bdev_nvme_disable_controller", 00:21:31.944 "bdev_nvme_enable_controller", 00:21:31.944 "bdev_nvme_reset_controller", 00:21:31.944 "bdev_nvme_get_transport_statistics", 00:21:31.944 "bdev_nvme_apply_firmware", 00:21:31.944 "bdev_nvme_detach_controller", 00:21:31.944 "bdev_nvme_get_controllers", 00:21:31.944 "bdev_nvme_attach_controller", 00:21:31.944 "bdev_nvme_set_hotplug", 00:21:31.944 "bdev_nvme_set_options", 00:21:31.944 "bdev_passthru_delete", 00:21:31.944 "bdev_passthru_create", 00:21:31.944 "bdev_lvol_set_parent_bdev", 00:21:31.944 "bdev_lvol_set_parent", 00:21:31.944 "bdev_lvol_check_shallow_copy", 00:21:31.944 "bdev_lvol_start_shallow_copy", 00:21:31.944 "bdev_lvol_grow_lvstore", 00:21:31.944 "bdev_lvol_get_lvols", 00:21:31.944 "bdev_lvol_get_lvstores", 00:21:31.944 "bdev_lvol_delete", 00:21:31.944 "bdev_lvol_set_read_only", 00:21:31.944 "bdev_lvol_resize", 00:21:31.944 "bdev_lvol_decouple_parent", 00:21:31.944 "bdev_lvol_inflate", 00:21:31.944 "bdev_lvol_rename", 00:21:31.944 "bdev_lvol_clone_bdev", 00:21:31.944 "bdev_lvol_clone", 00:21:31.944 "bdev_lvol_snapshot", 00:21:31.944 "bdev_lvol_create", 00:21:31.944 "bdev_lvol_delete_lvstore", 00:21:31.944 "bdev_lvol_rename_lvstore", 00:21:31.944 "bdev_lvol_create_lvstore", 00:21:31.944 "bdev_raid_set_options", 00:21:31.944 "bdev_raid_remove_base_bdev", 00:21:31.944 "bdev_raid_add_base_bdev", 00:21:31.944 "bdev_raid_delete", 00:21:31.944 "bdev_raid_create", 00:21:31.944 "bdev_raid_get_bdevs", 00:21:31.944 "bdev_error_inject_error", 00:21:31.944 "bdev_error_delete", 00:21:31.944 "bdev_error_create", 00:21:31.944 "bdev_split_delete", 00:21:31.944 "bdev_split_create", 00:21:31.944 "bdev_delay_delete", 00:21:31.944 "bdev_delay_create", 00:21:31.944 "bdev_delay_update_latency", 00:21:31.944 "bdev_zone_block_delete", 00:21:31.944 "bdev_zone_block_create", 00:21:31.944 "blobfs_create", 00:21:31.944 "blobfs_detect", 00:21:31.944 "blobfs_set_cache_size", 00:21:31.944 "bdev_xnvme_delete", 00:21:31.944 "bdev_xnvme_create", 00:21:31.944 "bdev_aio_delete", 00:21:31.944 "bdev_aio_rescan", 00:21:31.944 "bdev_aio_create", 00:21:31.944 "bdev_ftl_set_property", 00:21:31.944 "bdev_ftl_get_properties", 00:21:31.944 "bdev_ftl_get_stats", 00:21:31.944 "bdev_ftl_unmap", 00:21:31.944 "bdev_ftl_unload", 00:21:31.944 "bdev_ftl_delete", 00:21:31.944 "bdev_ftl_load", 00:21:31.944 "bdev_ftl_create", 00:21:31.944 "bdev_virtio_attach_controller", 00:21:31.944 "bdev_virtio_scsi_get_devices", 00:21:31.944 "bdev_virtio_detach_controller", 00:21:31.944 "bdev_virtio_blk_set_hotplug", 00:21:31.944 "bdev_iscsi_delete", 00:21:31.944 "bdev_iscsi_create", 00:21:31.944 "bdev_iscsi_set_options", 00:21:31.944 "accel_error_inject_error", 00:21:31.944 "ioat_scan_accel_module", 00:21:31.944 "dsa_scan_accel_module", 00:21:31.944 "iaa_scan_accel_module", 00:21:31.944 "keyring_file_remove_key", 00:21:31.944 "keyring_file_add_key", 00:21:31.944 "keyring_linux_set_options", 00:21:31.944 "fsdev_aio_delete", 00:21:31.944 "fsdev_aio_create", 00:21:31.944 "iscsi_get_histogram", 00:21:31.944 "iscsi_enable_histogram", 00:21:31.944 "iscsi_set_options", 00:21:31.944 "iscsi_get_auth_groups", 00:21:31.944 "iscsi_auth_group_remove_secret", 00:21:31.944 "iscsi_auth_group_add_secret", 00:21:31.944 "iscsi_delete_auth_group", 00:21:31.944 "iscsi_create_auth_group", 00:21:31.944 "iscsi_set_discovery_auth", 00:21:31.944 "iscsi_get_options", 00:21:31.944 "iscsi_target_node_request_logout", 00:21:31.944 "iscsi_target_node_set_redirect", 00:21:31.944 
"iscsi_target_node_set_auth", 00:21:31.944 "iscsi_target_node_add_lun", 00:21:31.944 "iscsi_get_stats", 00:21:31.944 "iscsi_get_connections", 00:21:31.944 "iscsi_portal_group_set_auth", 00:21:31.944 "iscsi_start_portal_group", 00:21:31.944 "iscsi_delete_portal_group", 00:21:31.944 "iscsi_create_portal_group", 00:21:31.944 "iscsi_get_portal_groups", 00:21:31.944 "iscsi_delete_target_node", 00:21:31.944 "iscsi_target_node_remove_pg_ig_maps", 00:21:31.944 "iscsi_target_node_add_pg_ig_maps", 00:21:31.944 "iscsi_create_target_node", 00:21:31.944 "iscsi_get_target_nodes", 00:21:31.944 "iscsi_delete_initiator_group", 00:21:31.944 "iscsi_initiator_group_remove_initiators", 00:21:31.944 "iscsi_initiator_group_add_initiators", 00:21:31.944 "iscsi_create_initiator_group", 00:21:31.944 "iscsi_get_initiator_groups", 00:21:31.944 "nvmf_set_crdt", 00:21:31.944 "nvmf_set_config", 00:21:31.944 "nvmf_set_max_subsystems", 00:21:31.944 "nvmf_stop_mdns_prr", 00:21:31.944 "nvmf_publish_mdns_prr", 00:21:31.944 "nvmf_subsystem_get_listeners", 00:21:31.944 "nvmf_subsystem_get_qpairs", 00:21:31.944 "nvmf_subsystem_get_controllers", 00:21:31.944 "nvmf_get_stats", 00:21:31.944 "nvmf_get_transports", 00:21:31.944 "nvmf_create_transport", 00:21:31.944 "nvmf_get_targets", 00:21:31.944 "nvmf_delete_target", 00:21:31.944 "nvmf_create_target", 00:21:31.944 "nvmf_subsystem_allow_any_host", 00:21:31.944 "nvmf_subsystem_set_keys", 00:21:31.944 "nvmf_subsystem_remove_host", 00:21:31.944 "nvmf_subsystem_add_host", 00:21:31.944 "nvmf_ns_remove_host", 00:21:31.944 "nvmf_ns_add_host", 00:21:31.944 "nvmf_subsystem_remove_ns", 00:21:31.944 "nvmf_subsystem_set_ns_ana_group", 00:21:31.944 "nvmf_subsystem_add_ns", 00:21:31.944 "nvmf_subsystem_listener_set_ana_state", 00:21:31.944 "nvmf_discovery_get_referrals", 00:21:31.944 "nvmf_discovery_remove_referral", 00:21:31.944 "nvmf_discovery_add_referral", 00:21:31.944 "nvmf_subsystem_remove_listener", 00:21:31.944 "nvmf_subsystem_add_listener", 00:21:31.944 "nvmf_delete_subsystem", 00:21:31.944 "nvmf_create_subsystem", 00:21:31.944 "nvmf_get_subsystems", 00:21:31.944 "env_dpdk_get_mem_stats", 00:21:31.944 "nbd_get_disks", 00:21:31.944 "nbd_stop_disk", 00:21:31.944 "nbd_start_disk", 00:21:31.944 "ublk_recover_disk", 00:21:31.944 "ublk_get_disks", 00:21:31.944 "ublk_stop_disk", 00:21:31.944 "ublk_start_disk", 00:21:31.944 "ublk_destroy_target", 00:21:31.944 "ublk_create_target", 00:21:31.944 "virtio_blk_create_transport", 00:21:31.944 "virtio_blk_get_transports", 00:21:31.944 "vhost_controller_set_coalescing", 00:21:31.944 "vhost_get_controllers", 00:21:31.944 "vhost_delete_controller", 00:21:31.944 "vhost_create_blk_controller", 00:21:31.944 "vhost_scsi_controller_remove_target", 00:21:31.944 "vhost_scsi_controller_add_target", 00:21:31.944 "vhost_start_scsi_controller", 00:21:31.944 "vhost_create_scsi_controller", 00:21:31.944 "thread_set_cpumask", 00:21:31.944 "scheduler_set_options", 00:21:31.944 "framework_get_governor", 00:21:31.944 "framework_get_scheduler", 00:21:31.944 "framework_set_scheduler", 00:21:31.944 "framework_get_reactors", 00:21:31.944 "thread_get_io_channels", 00:21:31.944 "thread_get_pollers", 00:21:31.944 "thread_get_stats", 00:21:31.944 "framework_monitor_context_switch", 00:21:31.944 "spdk_kill_instance", 00:21:31.944 "log_enable_timestamps", 00:21:31.944 "log_get_flags", 00:21:31.944 "log_clear_flag", 00:21:31.944 "log_set_flag", 00:21:31.944 "log_get_level", 00:21:31.944 "log_set_level", 00:21:31.944 "log_get_print_level", 00:21:31.944 "log_set_print_level", 
00:21:31.944 "framework_enable_cpumask_locks", 00:21:31.944 "framework_disable_cpumask_locks", 00:21:31.944 "framework_wait_init", 00:21:31.944 "framework_start_init", 00:21:31.944 "scsi_get_devices", 00:21:31.944 "bdev_get_histogram", 00:21:31.944 "bdev_enable_histogram", 00:21:31.944 "bdev_set_qos_limit", 00:21:31.944 "bdev_set_qd_sampling_period", 00:21:31.944 "bdev_get_bdevs", 00:21:31.944 "bdev_reset_iostat", 00:21:31.945 "bdev_get_iostat", 00:21:31.945 "bdev_examine", 00:21:31.945 "bdev_wait_for_examine", 00:21:31.945 "bdev_set_options", 00:21:31.945 "accel_get_stats", 00:21:31.945 "accel_set_options", 00:21:31.945 "accel_set_driver", 00:21:31.945 "accel_crypto_key_destroy", 00:21:31.945 "accel_crypto_keys_get", 00:21:31.945 "accel_crypto_key_create", 00:21:31.945 "accel_assign_opc", 00:21:31.945 "accel_get_module_info", 00:21:31.945 "accel_get_opc_assignments", 00:21:31.945 "vmd_rescan", 00:21:31.945 "vmd_remove_device", 00:21:31.945 "vmd_enable", 00:21:31.945 "sock_get_default_impl", 00:21:31.945 "sock_set_default_impl", 00:21:31.945 "sock_impl_set_options", 00:21:31.945 "sock_impl_get_options", 00:21:31.945 "iobuf_get_stats", 00:21:31.945 "iobuf_set_options", 00:21:31.945 "keyring_get_keys", 00:21:31.945 "framework_get_pci_devices", 00:21:31.945 "framework_get_config", 00:21:31.945 "framework_get_subsystems", 00:21:31.945 "fsdev_set_opts", 00:21:31.945 "fsdev_get_opts", 00:21:31.945 "trace_get_info", 00:21:31.945 "trace_get_tpoint_group_mask", 00:21:31.945 "trace_disable_tpoint_group", 00:21:31.945 "trace_enable_tpoint_group", 00:21:31.945 "trace_clear_tpoint_mask", 00:21:31.945 "trace_set_tpoint_mask", 00:21:31.945 "notify_get_notifications", 00:21:31.945 "notify_get_types", 00:21:31.945 "spdk_get_version", 00:21:31.945 "rpc_get_methods" 00:21:31.945 ] 00:21:31.945 13:41:39 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:21:31.945 13:41:39 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:31.945 13:41:39 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:31.945 13:41:39 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:21:31.945 13:41:39 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59074 00:21:31.945 13:41:39 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59074 ']' 00:21:31.945 13:41:39 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59074 00:21:31.945 13:41:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:21:31.945 13:41:39 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.945 13:41:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59074 00:21:31.945 13:41:39 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:31.945 13:41:39 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:31.945 13:41:39 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59074' 00:21:31.945 killing process with pid 59074 00:21:31.945 13:41:39 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59074 00:21:31.945 13:41:39 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59074 00:21:35.237 00:21:35.237 real 0m4.576s 00:21:35.237 user 0m8.197s 00:21:35.237 sys 0m0.663s 00:21:35.237 13:41:42 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:35.237 13:41:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:35.237 ************************************ 00:21:35.237 END TEST spdkcli_tcp 00:21:35.237 
************************************ 00:21:35.237 13:41:42 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:21:35.237 13:41:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:35.237 13:41:42 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:35.237 13:41:42 -- common/autotest_common.sh@10 -- # set +x 00:21:35.237 ************************************ 00:21:35.237 START TEST dpdk_mem_utility 00:21:35.237 ************************************ 00:21:35.237 13:41:42 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:21:35.237 * Looking for test storage... 00:21:35.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:21:35.237 13:41:42 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:35.237 13:41:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:21:35.237 13:41:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:35.237 13:41:42 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:35.237 13:41:42 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:21:35.237 13:41:42 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:35.237 13:41:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:35.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.237 --rc genhtml_branch_coverage=1 00:21:35.237 --rc genhtml_function_coverage=1 00:21:35.237 --rc genhtml_legend=1 00:21:35.237 --rc geninfo_all_blocks=1 00:21:35.237 --rc geninfo_unexecuted_blocks=1 00:21:35.237 00:21:35.237 ' 00:21:35.237 13:41:42 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:35.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.237 --rc genhtml_branch_coverage=1 00:21:35.237 --rc genhtml_function_coverage=1 00:21:35.237 --rc genhtml_legend=1 00:21:35.237 --rc geninfo_all_blocks=1 00:21:35.237 --rc geninfo_unexecuted_blocks=1 00:21:35.237 00:21:35.237 ' 00:21:35.237 13:41:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:35.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.237 --rc genhtml_branch_coverage=1 00:21:35.237 --rc genhtml_function_coverage=1 00:21:35.237 --rc genhtml_legend=1 00:21:35.237 --rc geninfo_all_blocks=1 00:21:35.237 --rc geninfo_unexecuted_blocks=1 00:21:35.237 00:21:35.237 ' 00:21:35.237 13:41:42 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:35.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:35.237 --rc genhtml_branch_coverage=1 00:21:35.237 --rc genhtml_function_coverage=1 00:21:35.237 --rc genhtml_legend=1 00:21:35.237 --rc geninfo_all_blocks=1 00:21:35.237 --rc geninfo_unexecuted_blocks=1 00:21:35.237 00:21:35.237 ' 00:21:35.237 13:41:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:21:35.237 13:41:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59201 00:21:35.237 13:41:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:35.237 13:41:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59201 00:21:35.237 13:41:42 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59201 ']' 00:21:35.237 13:41:42 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:21:35.237 13:41:42 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.237 13:41:42 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:35.237 13:41:42 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.237 13:41:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:21:35.237 [2024-11-20 13:41:42.621302] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:21:35.237 [2024-11-20 13:41:42.621548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59201 ] 00:21:35.237 [2024-11-20 13:41:42.800330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.237 [2024-11-20 13:41:42.933083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.621 13:41:43 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.621 13:41:43 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:21:36.621 13:41:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:21:36.621 13:41:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:21:36.621 13:41:43 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.621 13:41:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:21:36.621 { 00:21:36.621 "filename": "/tmp/spdk_mem_dump.txt" 00:21:36.621 } 00:21:36.621 13:41:43 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.621 13:41:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:21:36.621 DPDK memory size 824.000000 MiB in 1 heap(s) 00:21:36.621 1 heaps totaling size 824.000000 MiB 00:21:36.621 size: 824.000000 MiB heap id: 0 00:21:36.621 end heaps---------- 00:21:36.621 9 mempools totaling size 603.782043 MiB 00:21:36.621 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:21:36.621 size: 158.602051 MiB name: PDU_data_out_Pool 00:21:36.621 size: 100.555481 MiB name: bdev_io_59201 00:21:36.621 size: 50.003479 MiB name: msgpool_59201 00:21:36.621 size: 36.509338 MiB name: fsdev_io_59201 00:21:36.621 size: 21.763794 MiB name: PDU_Pool 00:21:36.621 size: 19.513306 MiB name: SCSI_TASK_Pool 00:21:36.621 size: 4.133484 MiB name: evtpool_59201 00:21:36.621 size: 0.026123 MiB name: Session_Pool 00:21:36.621 end mempools------- 00:21:36.621 6 memzones totaling size 4.142822 MiB 00:21:36.621 size: 1.000366 MiB name: RG_ring_0_59201 00:21:36.621 size: 1.000366 MiB name: RG_ring_1_59201 00:21:36.621 size: 1.000366 MiB name: RG_ring_4_59201 00:21:36.621 size: 1.000366 MiB name: RG_ring_5_59201 00:21:36.621 size: 0.125366 MiB name: RG_ring_2_59201 00:21:36.621 size: 0.015991 MiB name: RG_ring_3_59201 00:21:36.621 end memzones------- 00:21:36.621 13:41:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:21:36.621 heap id: 0 total size: 824.000000 MiB number of busy elements: 319 number of 
free elements: 18 00:21:36.621 list of free elements. size: 16.780396 MiB 00:21:36.621 element at address: 0x200006400000 with size: 1.995972 MiB 00:21:36.621 element at address: 0x20000a600000 with size: 1.995972 MiB 00:21:36.621 element at address: 0x200003e00000 with size: 1.991028 MiB 00:21:36.621 element at address: 0x200019500040 with size: 0.999939 MiB 00:21:36.621 element at address: 0x200019900040 with size: 0.999939 MiB 00:21:36.621 element at address: 0x200019a00000 with size: 0.999084 MiB 00:21:36.621 element at address: 0x200032600000 with size: 0.994324 MiB 00:21:36.621 element at address: 0x200000400000 with size: 0.992004 MiB 00:21:36.621 element at address: 0x200019200000 with size: 0.959656 MiB 00:21:36.621 element at address: 0x200019d00040 with size: 0.936401 MiB 00:21:36.621 element at address: 0x200000200000 with size: 0.716980 MiB 00:21:36.621 element at address: 0x20001b400000 with size: 0.561707 MiB 00:21:36.621 element at address: 0x200000c00000 with size: 0.489197 MiB 00:21:36.621 element at address: 0x200019600000 with size: 0.487976 MiB 00:21:36.621 element at address: 0x200019e00000 with size: 0.485413 MiB 00:21:36.621 element at address: 0x200012c00000 with size: 0.433472 MiB 00:21:36.621 element at address: 0x200028800000 with size: 0.390442 MiB 00:21:36.621 element at address: 0x200000800000 with size: 0.350891 MiB 00:21:36.621 list of standard malloc elements. size: 199.288696 MiB 00:21:36.621 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:21:36.621 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:21:36.621 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:21:36.621 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:21:36.621 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:21:36.621 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:21:36.621 element at address: 0x200019deff40 with size: 0.062683 MiB 00:21:36.621 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:21:36.621 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:21:36.621 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:21:36.621 element at address: 0x200012bff040 with size: 0.000305 MiB 00:21:36.622 [several hundred further standard malloc elements, each 0.000244 MiB, spanning the 0x2000002... through 0x2000288... address ranges; full per-address listing elided] 00:21:36.624 list of memzone associated elements.
size: 607.930908 MiB 00:21:36.624 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:21:36.624 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:21:36.624 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:21:36.624 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:21:36.624 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:21:36.624 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59201_0 00:21:36.624 element at address: 0x200000dff340 with size: 48.003113 MiB 00:21:36.624 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59201_0 00:21:36.624 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:21:36.624 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59201_0 00:21:36.624 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:21:36.624 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:21:36.624 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:21:36.624 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:21:36.624 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:21:36.624 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59201_0 00:21:36.624 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:21:36.624 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59201 00:21:36.624 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:21:36.624 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59201 00:21:36.624 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:21:36.624 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:21:36.624 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:21:36.624 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:21:36.624 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:21:36.624 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:21:36.624 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:21:36.624 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:21:36.624 element at address: 0x200000cff100 with size: 1.000549 MiB 00:21:36.624 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59201 00:21:36.624 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:21:36.624 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59201 00:21:36.624 element at address: 0x200019affd40 with size: 1.000549 MiB 00:21:36.624 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59201 00:21:36.624 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:21:36.624 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59201 00:21:36.624 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:21:36.624 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59201 00:21:36.624 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:21:36.624 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59201 00:21:36.624 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:21:36.624 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:21:36.624 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:21:36.624 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:21:36.624 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:21:36.624 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:21:36.624 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:21:36.624 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59201 00:21:36.624 element at address: 0x20000085df80 with size: 0.125549 MiB 00:21:36.624 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59201 00:21:36.624 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:21:36.624 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:21:36.624 element at address: 0x200028864140 with size: 0.023804 MiB 00:21:36.624 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:21:36.624 element at address: 0x200000859d40 with size: 0.016174 MiB 00:21:36.624 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59201 00:21:36.624 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:21:36.624 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:21:36.624 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:21:36.624 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59201 00:21:36.624 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:21:36.624 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59201 00:21:36.624 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:21:36.624 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59201 00:21:36.624 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:21:36.624 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:21:36.624 13:41:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:21:36.624 13:41:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59201 00:21:36.624 13:41:44 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59201 ']' 00:21:36.624 13:41:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59201 00:21:36.624 13:41:44 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:21:36.624 13:41:44 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.624 13:41:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59201 00:21:36.624 killing process with pid 59201 00:21:36.624 13:41:44 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:36.624 13:41:44 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:36.624 13:41:44 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59201' 00:21:36.624 13:41:44 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59201 00:21:36.624 13:41:44 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59201 00:21:39.158 00:21:39.158 real 0m4.490s 00:21:39.158 user 0m4.446s 00:21:39.158 sys 0m0.599s 00:21:39.158 13:41:46 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:39.158 13:41:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:21:39.158 ************************************ 00:21:39.158 END TEST dpdk_mem_utility 00:21:39.158 ************************************ 00:21:39.158 13:41:46 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:21:39.158 13:41:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:39.158 13:41:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:39.158 13:41:46 -- common/autotest_common.sh@10 -- # set +x 
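The teardown traced above follows a recognizable autotest_common.sh pattern: validate the pid, probe that the process still exists, check its command name so a sudo wrapper is not killed directly, then kill and reap it. A minimal sketch of that killprocess pattern, reconstructed from the trace (this is an approximation, not the exact SPDK source; the variable and helper names mirror the trace):

killprocess() {
    local pid=$1
    # refuse an empty pid (mirrors: '[' -z 59201 ']')
    [ -z "$pid" ] && return 1
    # kill -0 delivers no signal; it only probes that the process exists
    kill -0 "$pid" 2>/dev/null || return 0
    if [ "$(uname)" = Linux ]; then
        # look up the command name (mirrors: ps --no-headers -o comm= 59201)
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        # never signal the sudo wrapper itself
        [ "$process_name" = sudo ] && return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    # reap the child so the harness does not race its shutdown
    wait "$pid"
}

The closing wait matters: it blocks until the SPDK app (here reactor_0, pid 59201) has fully exited, so the next test does not start while hugepages and sockets are still being released.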
00:21:39.158 ************************************ 00:21:39.158 START TEST event 00:21:39.158 ************************************ 00:21:39.158 13:41:46 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:21:39.417 * Looking for test storage... 00:21:39.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:21:39.417 13:41:46 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:39.417 13:41:46 event -- common/autotest_common.sh@1693 -- # lcov --version 00:21:39.417 13:41:46 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:39.417 13:41:47 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:39.417 13:41:47 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:39.417 13:41:47 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:39.417 13:41:47 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:39.417 13:41:47 event -- scripts/common.sh@336 -- # IFS=.-: 00:21:39.417 13:41:47 event -- scripts/common.sh@336 -- # read -ra ver1 00:21:39.417 13:41:47 event -- scripts/common.sh@337 -- # IFS=.-: 00:21:39.417 13:41:47 event -- scripts/common.sh@337 -- # read -ra ver2 00:21:39.417 13:41:47 event -- scripts/common.sh@338 -- # local 'op=<' 00:21:39.417 13:41:47 event -- scripts/common.sh@340 -- # ver1_l=2 00:21:39.417 13:41:47 event -- scripts/common.sh@341 -- # ver2_l=1 00:21:39.417 13:41:47 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:39.417 13:41:47 event -- scripts/common.sh@344 -- # case "$op" in 00:21:39.417 13:41:47 event -- scripts/common.sh@345 -- # : 1 00:21:39.417 13:41:47 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:39.417 13:41:47 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:39.417 13:41:47 event -- scripts/common.sh@365 -- # decimal 1 00:21:39.417 13:41:47 event -- scripts/common.sh@353 -- # local d=1 00:21:39.417 13:41:47 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:39.417 13:41:47 event -- scripts/common.sh@355 -- # echo 1 00:21:39.417 13:41:47 event -- scripts/common.sh@365 -- # ver1[v]=1 00:21:39.417 13:41:47 event -- scripts/common.sh@366 -- # decimal 2 00:21:39.417 13:41:47 event -- scripts/common.sh@353 -- # local d=2 00:21:39.417 13:41:47 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:39.417 13:41:47 event -- scripts/common.sh@355 -- # echo 2 00:21:39.417 13:41:47 event -- scripts/common.sh@366 -- # ver2[v]=2 00:21:39.417 13:41:47 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:39.417 13:41:47 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:39.417 13:41:47 event -- scripts/common.sh@368 -- # return 0 00:21:39.417 13:41:47 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:39.417 13:41:47 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:39.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.417 --rc genhtml_branch_coverage=1 00:21:39.417 --rc genhtml_function_coverage=1 00:21:39.417 --rc genhtml_legend=1 00:21:39.417 --rc geninfo_all_blocks=1 00:21:39.417 --rc geninfo_unexecuted_blocks=1 00:21:39.417 00:21:39.417 ' 00:21:39.417 13:41:47 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:39.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.417 --rc genhtml_branch_coverage=1 00:21:39.417 --rc genhtml_function_coverage=1 00:21:39.417 --rc genhtml_legend=1 00:21:39.417 --rc 
geninfo_all_blocks=1 00:21:39.417 --rc geninfo_unexecuted_blocks=1 00:21:39.417 00:21:39.417 ' 00:21:39.417 13:41:47 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:39.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.417 --rc genhtml_branch_coverage=1 00:21:39.418 --rc genhtml_function_coverage=1 00:21:39.418 --rc genhtml_legend=1 00:21:39.418 --rc geninfo_all_blocks=1 00:21:39.418 --rc geninfo_unexecuted_blocks=1 00:21:39.418 00:21:39.418 ' 00:21:39.418 13:41:47 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:39.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.418 --rc genhtml_branch_coverage=1 00:21:39.418 --rc genhtml_function_coverage=1 00:21:39.418 --rc genhtml_legend=1 00:21:39.418 --rc geninfo_all_blocks=1 00:21:39.418 --rc geninfo_unexecuted_blocks=1 00:21:39.418 00:21:39.418 ' 00:21:39.418 13:41:47 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:39.418 13:41:47 event -- bdev/nbd_common.sh@6 -- # set -e 00:21:39.418 13:41:47 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:21:39.418 13:41:47 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:21:39.418 13:41:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:39.418 13:41:47 event -- common/autotest_common.sh@10 -- # set +x 00:21:39.418 ************************************ 00:21:39.418 START TEST event_perf 00:21:39.418 ************************************ 00:21:39.418 13:41:47 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:21:39.418 Running I/O for 1 seconds...[2024-11-20 13:41:47.132117] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:21:39.418 [2024-11-20 13:41:47.132349] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59316 ] 00:21:39.677 [2024-11-20 13:41:47.318234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:39.937 [2024-11-20 13:41:47.455158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.937 [2024-11-20 13:41:47.455313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.937 [2024-11-20 13:41:47.455371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:39.937 [2024-11-20 13:41:47.455400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.316 Running I/O for 1 seconds... 00:21:41.317 lcore 0: 185871 00:21:41.317 lcore 1: 185870 00:21:41.317 lcore 2: 185869 00:21:41.317 lcore 3: 185869 00:21:41.317 done. 
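Each reactor above reports how many events it processed during the one-second window requested by -t 1 on the four cores enabled by -m 0xF; the four counters sum to 743,479 events, roughly 186k per lcore. A quick way to total them, assuming the app's stdout was captured to a hypothetical event_perf.log:

# sum the per-lcore counters (fields: "lcore", "N:", count)
awk '/^lcore/ {sum += $3} END {printf "total: %d events/sec\n", sum}' event_perf.log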
00:21:41.317 00:21:41.317 real 0m1.653s 00:21:41.317 user 0m4.400s 00:21:41.317 sys 0m0.130s 00:21:41.317 13:41:48 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:41.317 13:41:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:21:41.317 ************************************ 00:21:41.317 END TEST event_perf 00:21:41.317 ************************************ 00:21:41.317 13:41:48 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:21:41.317 13:41:48 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:41.317 13:41:48 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:41.317 13:41:48 event -- common/autotest_common.sh@10 -- # set +x 00:21:41.317 ************************************ 00:21:41.317 START TEST event_reactor 00:21:41.317 ************************************ 00:21:41.317 13:41:48 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:21:41.317 [2024-11-20 13:41:48.856262] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:21:41.317 [2024-11-20 13:41:48.856470] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59356 ] 00:21:41.575 [2024-11-20 13:41:49.037134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.575 [2024-11-20 13:41:49.176667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.951 test_start 00:21:42.951 oneshot 00:21:42.951 tick 100 00:21:42.951 tick 100 00:21:42.951 tick 250 00:21:42.951 tick 100 00:21:42.951 tick 100 00:21:42.951 tick 250 00:21:42.951 tick 100 00:21:42.951 tick 500 00:21:42.951 tick 100 00:21:42.951 tick 100 00:21:42.951 tick 250 00:21:42.951 tick 100 00:21:42.951 tick 100 00:21:42.951 test_end 00:21:42.951 00:21:42.951 real 0m1.621s 00:21:42.951 user 0m1.390s 00:21:42.951 sys 0m0.119s 00:21:42.951 ************************************ 00:21:42.951 END TEST event_reactor 00:21:42.951 ************************************ 00:21:42.951 13:41:50 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:42.951 13:41:50 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:21:42.951 13:41:50 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:21:42.951 13:41:50 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:21:42.951 13:41:50 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:42.951 13:41:50 event -- common/autotest_common.sh@10 -- # set +x 00:21:42.951 ************************************ 00:21:42.951 START TEST event_reactor_perf 00:21:42.951 ************************************ 00:21:42.951 13:41:50 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:21:42.951 [2024-11-20 13:41:50.533880] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:21:42.951 [2024-11-20 13:41:50.534080] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59392 ] 00:21:43.210 [2024-11-20 13:41:50.698585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.210 [2024-11-20 13:41:50.831643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.589 test_start 00:21:44.589 test_end 00:21:44.589 Performance: 335091 events per second 00:21:44.589 00:21:44.589 real 0m1.600s 00:21:44.589 user 0m1.391s 00:21:44.589 sys 0m0.101s 00:21:44.589 13:41:52 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:44.589 13:41:52 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:21:44.589 ************************************ 00:21:44.589 END TEST event_reactor_perf 00:21:44.589 ************************************ 00:21:44.589 13:41:52 event -- event/event.sh@49 -- # uname -s 00:21:44.589 13:41:52 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:21:44.589 13:41:52 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:21:44.589 13:41:52 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:44.589 13:41:52 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:44.589 13:41:52 event -- common/autotest_common.sh@10 -- # set +x 00:21:44.589 ************************************ 00:21:44.589 START TEST event_scheduler 00:21:44.589 ************************************ 00:21:44.589 13:41:52 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:21:44.589 * Looking for test storage... 
00:21:44.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:21:44.589 13:41:52 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:44.589 13:41:52 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:21:44.589 13:41:52 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:44.848 13:41:52 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:44.848 13:41:52 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:21:44.848 13:41:52 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:44.848 13:41:52 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:44.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.848 --rc genhtml_branch_coverage=1 00:21:44.848 --rc genhtml_function_coverage=1 00:21:44.848 --rc genhtml_legend=1 00:21:44.848 --rc geninfo_all_blocks=1 00:21:44.848 --rc geninfo_unexecuted_blocks=1 00:21:44.848 00:21:44.848 ' 00:21:44.848 13:41:52 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:44.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.848 --rc genhtml_branch_coverage=1 00:21:44.848 --rc genhtml_function_coverage=1 00:21:44.848 --rc genhtml_legend=1 00:21:44.848 --rc geninfo_all_blocks=1 00:21:44.848 --rc geninfo_unexecuted_blocks=1 00:21:44.848 00:21:44.848 ' 00:21:44.848 13:41:52 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:44.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.848 --rc genhtml_branch_coverage=1 00:21:44.848 --rc genhtml_function_coverage=1 00:21:44.848 --rc genhtml_legend=1 00:21:44.848 --rc geninfo_all_blocks=1 00:21:44.848 --rc geninfo_unexecuted_blocks=1 00:21:44.848 00:21:44.848 ' 00:21:44.848 13:41:52 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:44.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:44.848 --rc genhtml_branch_coverage=1 00:21:44.848 --rc genhtml_function_coverage=1 00:21:44.848 --rc genhtml_legend=1 00:21:44.848 --rc geninfo_all_blocks=1 00:21:44.848 --rc geninfo_unexecuted_blocks=1 00:21:44.848 00:21:44.848 ' 00:21:44.848 13:41:52 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:21:44.848 13:41:52 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59468 00:21:44.848 13:41:52 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:21:44.848 13:41:52 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:21:44.848 13:41:52 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59468 00:21:44.848 13:41:52 
event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59468 ']' 00:21:44.848 13:41:52 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:44.848 13:41:52 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:44.848 13:41:52 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:44.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:44.849 13:41:52 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:44.849 13:41:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:44.849 [2024-11-20 13:41:52.480110] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:21:44.849 [2024-11-20 13:41:52.480339] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59468 ] 00:21:45.108 [2024-11-20 13:41:52.661454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:45.108 [2024-11-20 13:41:52.819834] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.108 [2024-11-20 13:41:52.819917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.108 [2024-11-20 13:41:52.820088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.108 [2024-11-20 13:41:52.820120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:45.677 13:41:53 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.677 13:41:53 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:21:45.677 13:41:53 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:21:45.677 13:41:53 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.677 13:41:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:45.677 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:45.677 POWER: Cannot set governor of lcore 0 to userspace 00:21:45.677 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:45.677 POWER: Cannot set governor of lcore 0 to performance 00:21:45.677 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:45.677 POWER: Cannot set governor of lcore 0 to userspace 00:21:45.677 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:21:45.677 POWER: Cannot set governor of lcore 0 to userspace 00:21:45.677 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:21:45.677 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:21:45.677 POWER: Unable to set Power Management Environment for lcore 0 00:21:45.677 [2024-11-20 13:41:53.357703] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:21:45.677 [2024-11-20 13:41:53.357746] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:21:45.677 [2024-11-20 13:41:53.357760] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:21:45.677 [2024-11-20 13:41:53.357787] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:21:45.677 [2024-11-20 13:41:53.357797] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:21:45.677 [2024-11-20 13:41:53.357810] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:21:45.677 13:41:53 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.677 13:41:53 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:21:45.677 13:41:53 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.677 13:41:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:46.274 [2024-11-20 13:41:53.786355] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:21:46.274 13:41:53 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.274 13:41:53 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:21:46.274 13:41:53 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:46.274 13:41:53 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:46.274 13:41:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:46.274 ************************************ 00:21:46.274 START TEST scheduler_create_thread 00:21:46.274 ************************************ 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:46.274 2 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:46.274 3 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:46.274 4 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:46.274 5 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:46.274 6 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:46.274 7 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:46.274 8 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:46.274 9 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:46.274 10 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.274 13:41:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:47.654 13:41:55 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.654 13:41:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:21:47.654 13:41:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:21:47.654 13:41:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.654 13:41:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:48.595 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.595 13:41:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:21:48.595 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.595 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:49.530 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.530 13:41:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:21:49.530 13:41:56 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:21:49.530 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.530 13:41:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:50.100 ************************************ 00:21:50.100 END TEST scheduler_create_thread 00:21:50.100 ************************************ 00:21:50.100 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.100 00:21:50.100 real 0m3.886s 00:21:50.100 user 0m0.034s 00:21:50.100 sys 0m0.006s 00:21:50.100 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:50.100 13:41:57 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:21:50.100 13:41:57 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:21:50.100 13:41:57 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59468 00:21:50.100 13:41:57 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59468 ']' 00:21:50.100 13:41:57 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59468 00:21:50.100 13:41:57 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:21:50.100 13:41:57 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:50.100 13:41:57 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59468 00:21:50.100 killing process with pid 59468 00:21:50.100 13:41:57 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:50.100 13:41:57 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:50.100 13:41:57 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59468' 00:21:50.100 13:41:57 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59468 00:21:50.100 13:41:57 event.event_scheduler -- 
common/autotest_common.sh@978 -- # wait 59468 00:21:50.359 [2024-11-20 13:41:58.062654] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:21:51.734 00:21:51.734 real 0m7.246s 00:21:51.734 user 0m14.828s 00:21:51.734 sys 0m0.608s 00:21:51.734 13:41:59 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:51.734 ************************************ 00:21:51.734 END TEST event_scheduler 00:21:51.734 ************************************ 00:21:51.734 13:41:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:21:51.992 13:41:59 event -- event/event.sh@51 -- # modprobe -n nbd 00:21:51.992 13:41:59 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:21:51.992 13:41:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:51.992 13:41:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:51.992 13:41:59 event -- common/autotest_common.sh@10 -- # set +x 00:21:51.992 ************************************ 00:21:51.992 START TEST app_repeat 00:21:51.992 ************************************ 00:21:51.992 13:41:59 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:21:51.993 13:41:59 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:51.993 13:41:59 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:51.993 13:41:59 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:21:51.993 13:41:59 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:51.993 13:41:59 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:21:51.993 13:41:59 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:21:51.993 13:41:59 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:21:51.993 Process app_repeat pid: 59596 00:21:51.993 spdk_app_start Round 0 00:21:51.993 13:41:59 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59596 00:21:51.993 13:41:59 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:21:51.993 13:41:59 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59596' 00:21:51.993 13:41:59 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:21:51.993 13:41:59 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:21:51.993 13:41:59 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:21:51.993 13:41:59 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59596 /var/tmp/spdk-nbd.sock 00:21:51.993 13:41:59 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59596 ']' 00:21:51.993 13:41:59 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:51.993 13:41:59 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.993 13:41:59 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:51.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:51.993 13:41:59 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.993 13:41:59 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:21:51.993 [2024-11-20 13:41:59.548588] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:21:51.993 [2024-11-20 13:41:59.548737] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59596 ] 00:21:52.251 [2024-11-20 13:41:59.730047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:52.252 [2024-11-20 13:41:59.865173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:52.252 [2024-11-20 13:41:59.865207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:52.820 13:42:00 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:52.820 13:42:00 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:21:52.820 13:42:00 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:53.388 Malloc0 00:21:53.388 13:42:00 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:53.649 Malloc1 00:21:53.649 13:42:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:53.649 13:42:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:53.649 13:42:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:53.649 13:42:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:53.649 13:42:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:53.649 13:42:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:53.649 13:42:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:53.649 13:42:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:53.649 13:42:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:53.649 13:42:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:53.649 13:42:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:53.649 13:42:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:53.649 13:42:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:21:53.649 13:42:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:53.649 13:42:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:53.649 13:42:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:21:53.908 /dev/nbd0 00:21:53.908 13:42:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:53.908 13:42:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:53.908 13:42:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:53.908 13:42:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:21:53.908 13:42:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:53.908 13:42:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:53.908 13:42:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:53.908 13:42:01 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:21:53.908 13:42:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:53.908 13:42:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:53.909 13:42:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:53.909 1+0 records in 00:21:53.909 1+0 records out 00:21:53.909 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277103 s, 14.8 MB/s 00:21:53.909 13:42:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:53.909 13:42:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:21:53.909 13:42:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:53.909 13:42:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:53.909 13:42:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:21:53.909 13:42:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:53.909 13:42:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:53.909 13:42:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:21:54.168 /dev/nbd1 00:21:54.168 13:42:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:21:54.168 13:42:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:21:54.168 13:42:01 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:21:54.168 13:42:01 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:21:54.168 13:42:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:54.168 13:42:01 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:54.168 13:42:01 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:21:54.168 13:42:01 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:21:54.168 13:42:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:54.168 13:42:01 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:54.168 13:42:01 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:21:54.168 1+0 records in 00:21:54.168 1+0 records out 00:21:54.168 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00041865 s, 9.8 MB/s 00:21:54.168 13:42:01 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:54.168 13:42:01 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:21:54.168 13:42:01 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:21:54.168 13:42:01 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:54.168 13:42:01 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:21:54.168 13:42:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:54.168 13:42:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:54.168 13:42:01 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:54.168 13:42:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:54.168 
13:42:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:54.427 13:42:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:54.427 { 00:21:54.427 "nbd_device": "/dev/nbd0", 00:21:54.427 "bdev_name": "Malloc0" 00:21:54.427 }, 00:21:54.427 { 00:21:54.427 "nbd_device": "/dev/nbd1", 00:21:54.427 "bdev_name": "Malloc1" 00:21:54.427 } 00:21:54.427 ]' 00:21:54.427 13:42:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:54.427 { 00:21:54.427 "nbd_device": "/dev/nbd0", 00:21:54.427 "bdev_name": "Malloc0" 00:21:54.427 }, 00:21:54.427 { 00:21:54.427 "nbd_device": "/dev/nbd1", 00:21:54.427 "bdev_name": "Malloc1" 00:21:54.428 } 00:21:54.428 ]' 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:21:54.428 /dev/nbd1' 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:21:54.428 /dev/nbd1' 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:21:54.428 256+0 records in 00:21:54.428 256+0 records out 00:21:54.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00767418 s, 137 MB/s 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:54.428 256+0 records in 00:21:54.428 256+0 records out 00:21:54.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0274986 s, 38.1 MB/s 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:21:54.428 256+0 records in 00:21:54.428 256+0 records out 00:21:54.428 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269439 s, 38.9 MB/s 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:54.428 13:42:02 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:54.428 13:42:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:21:54.686 13:42:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:21:54.686 13:42:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:21:54.686 13:42:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:54.686 13:42:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:54.686 13:42:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:54.686 13:42:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:21:54.686 13:42:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:54.686 13:42:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:54.686 13:42:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:54.947 13:42:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:54.947 13:42:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:54.947 13:42:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:54.947 13:42:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:54.947 13:42:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:54.947 13:42:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:54.947 13:42:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:54.947 13:42:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:54.947 13:42:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:21:55.208 13:42:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:21:55.208 13:42:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:21:55.208 13:42:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:21:55.208 13:42:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:55.208 13:42:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:55.208 13:42:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:21:55.208 13:42:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:21:55.208 13:42:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:21:55.208 13:42:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:55.208 13:42:02 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:55.208 13:42:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:55.208 13:42:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:55.208 13:42:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:55.208 13:42:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:55.468 13:42:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:55.468 13:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:21:55.468 13:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:55.468 13:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:21:55.468 13:42:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:21:55.468 13:42:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:21:55.468 13:42:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:21:55.468 13:42:02 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:55.468 13:42:02 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:21:55.468 13:42:02 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:21:56.037 13:42:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:21:57.416 [2024-11-20 13:42:04.724156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:57.416 [2024-11-20 13:42:04.851736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.416 [2024-11-20 13:42:04.851755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.416 [2024-11-20 13:42:05.075098] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:21:57.416 [2024-11-20 13:42:05.075194] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:21:58.806 spdk_app_start Round 1 00:21:58.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:58.806 13:42:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:21:58.806 13:42:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:21:58.806 13:42:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59596 /var/tmp/spdk-nbd.sock 00:21:58.806 13:42:06 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59596 ']' 00:21:58.806 13:42:06 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:58.806 13:42:06 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:58.806 13:42:06 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
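The Malloc0/Malloc1 round trip traced above is the core of nbd_rpc_data_verify: write 1 MiB of random data through each NBD device, then compare the device contents back against the source file. A minimal standalone sketch of that write/verify pattern, assuming the NBD devices are already attached (the device list and scratch path are illustrative, not the harness's real ones):

  #!/usr/bin/env bash
  # Write/verify round trip in the style of bdev/nbd_common.sh (paths illustrative).
  nbd_list=(/dev/nbd0 /dev/nbd1)
  tmp_file=/tmp/nbdrandtest

  # Write phase: 256 x 4 KiB of random data, pushed to each device with O_DIRECT
  # so the bytes reach the backing bdev instead of sitting in the page cache.
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
  for dev in "${nbd_list[@]}"; do
      dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done

  # Verify phase: the first 1 MiB of every device must match the source file.
  for dev in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp_file" "$dev" || exit 1
  done
  rm "$tmp_file"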
00:21:58.806 13:42:06 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:58.806 13:42:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:21:59.065 13:42:06 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:59.065 13:42:06 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:21:59.065 13:42:06 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:59.323 Malloc0 00:21:59.323 13:42:07 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:21:59.893 Malloc1 00:21:59.893 13:42:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:59.893 13:42:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:59.893 13:42:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:59.893 13:42:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:59.893 13:42:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:59.893 13:42:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:59.893 13:42:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:21:59.893 13:42:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:59.893 13:42:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:21:59.893 13:42:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:59.893 13:42:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:21:59.893 13:42:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:59.893 13:42:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:21:59.893 13:42:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:59.893 13:42:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:21:59.893 13:42:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:22:00.152 /dev/nbd0 00:22:00.152 13:42:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:00.152 13:42:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:00.152 13:42:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:00.152 13:42:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:22:00.152 13:42:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:00.152 13:42:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:00.152 13:42:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:00.152 13:42:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:22:00.152 13:42:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:00.152 13:42:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:00.152 13:42:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:22:00.152 1+0 records in 00:22:00.152 1+0 records out 
00:22:00.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000561056 s, 7.3 MB/s 00:22:00.152 13:42:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:22:00.152 13:42:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:22:00.152 13:42:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:22:00.152 13:42:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:00.152 13:42:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:22:00.152 13:42:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:00.152 13:42:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:00.152 13:42:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:22:00.412 /dev/nbd1 00:22:00.412 13:42:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:00.412 13:42:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:00.412 13:42:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:00.412 13:42:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:22:00.412 13:42:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:00.412 13:42:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:00.412 13:42:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:00.412 13:42:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:22:00.412 13:42:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:00.412 13:42:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:00.412 13:42:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:22:00.412 1+0 records in 00:22:00.412 1+0 records out 00:22:00.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000780583 s, 5.2 MB/s 00:22:00.412 13:42:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:22:00.412 13:42:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:22:00.412 13:42:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:22:00.412 13:42:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:00.412 13:42:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:22:00.412 13:42:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:00.412 13:42:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:00.412 13:42:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:00.412 13:42:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:00.412 13:42:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:00.671 { 00:22:00.671 "nbd_device": "/dev/nbd0", 00:22:00.671 "bdev_name": "Malloc0" 00:22:00.671 }, 00:22:00.671 { 00:22:00.671 "nbd_device": "/dev/nbd1", 00:22:00.671 "bdev_name": "Malloc1" 00:22:00.671 } 
00:22:00.671 ]' 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:00.671 { 00:22:00.671 "nbd_device": "/dev/nbd0", 00:22:00.671 "bdev_name": "Malloc0" 00:22:00.671 }, 00:22:00.671 { 00:22:00.671 "nbd_device": "/dev/nbd1", 00:22:00.671 "bdev_name": "Malloc1" 00:22:00.671 } 00:22:00.671 ]' 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:22:00.671 /dev/nbd1' 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:22:00.671 /dev/nbd1' 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:00.671 13:42:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:22:00.930 256+0 records in 00:22:00.930 256+0 records out 00:22:00.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147926 s, 70.9 MB/s 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:00.930 256+0 records in 00:22:00.930 256+0 records out 00:22:00.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288735 s, 36.3 MB/s 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:22:00.930 256+0 records in 00:22:00.930 256+0 records out 00:22:00.930 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0228473 s, 45.9 MB/s 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:00.930 13:42:08 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:00.930 13:42:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:01.189 13:42:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:01.189 13:42:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:01.189 13:42:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:01.189 13:42:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:01.189 13:42:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:01.189 13:42:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:01.189 13:42:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:22:01.189 13:42:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:22:01.189 13:42:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:01.189 13:42:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:22:01.450 13:42:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:01.450 13:42:09 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:01.450 13:42:09 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:01.450 13:42:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:01.450 13:42:09 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:01.450 13:42:09 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:01.450 13:42:09 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:22:01.450 13:42:09 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:22:01.450 13:42:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:01.450 13:42:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:01.450 13:42:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:01.708 13:42:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:01.708 13:42:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:01.708 13:42:09 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:22:01.708 13:42:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:01.708 13:42:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:22:01.708 13:42:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:01.708 13:42:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:22:01.708 13:42:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:22:01.708 13:42:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:22:01.708 13:42:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:22:01.708 13:42:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:01.708 13:42:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:22:01.708 13:42:09 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:22:02.275 13:42:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:22:03.654 [2024-11-20 13:42:11.206420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:03.654 [2024-11-20 13:42:11.342306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.654 [2024-11-20 13:42:11.342324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.913 [2024-11-20 13:42:11.566686] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:22:03.913 [2024-11-20 13:42:11.566899] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:22:05.290 spdk_app_start Round 2 00:22:05.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:22:05.290 13:42:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:22:05.290 13:42:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:22:05.290 13:42:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59596 /var/tmp/spdk-nbd.sock 00:22:05.290 13:42:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59596 ']' 00:22:05.290 13:42:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:05.290 13:42:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.290 13:42:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
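Each round re-registers the Malloc bdevs and then blocks in waitforlisten until the app_repeat process answers on /var/tmp/spdk-nbd.sock. The trace only exposes the argument checks and max_retries=100, so the following is a hedged reconstruction of the polling loop, not the exact helper from common/autotest_common.sh (the real one issues an RPC to confirm readiness; here that is simplified to a socket-existence test):

  waitforlisten() {
      local pid=$1
      local rpc_addr=${2:-/var/tmp/spdk.sock}   # the trace passes /var/tmp/spdk-nbd.sock
      local max_retries=100 i
      echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
      for ((i = 0; i < max_retries; i++)); do
          kill -0 "$pid" 2>/dev/null || return 1   # target process died
          [[ -S $rpc_addr ]] && return 0           # simplification: socket check only
          sleep 0.1
      done
      return 1
  }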
00:22:05.290 13:42:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.290 13:42:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:22:05.550 13:42:13 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.550 13:42:13 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:22:05.550 13:42:13 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:22:05.808 Malloc0 00:22:05.808 13:42:13 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:22:06.067 Malloc1 00:22:06.326 13:42:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:22:06.326 13:42:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:06.326 13:42:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:22:06.326 13:42:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:22:06.326 13:42:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:06.326 13:42:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:22:06.326 13:42:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:22:06.326 13:42:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:06.326 13:42:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:22:06.326 13:42:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:22:06.326 13:42:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:06.326 13:42:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:22:06.326 13:42:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:22:06.326 13:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:22:06.326 13:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:06.326 13:42:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:22:06.326 /dev/nbd0 00:22:06.326 13:42:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:22:06.585 13:42:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:22:06.585 13:42:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:22:06.585 13:42:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:22:06.585 13:42:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:06.585 13:42:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:06.586 13:42:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:22:06.586 13:42:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:22:06.586 13:42:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:06.586 13:42:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:06.586 13:42:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:22:06.586 1+0 records in 00:22:06.586 1+0 records out 
00:22:06.586 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039192 s, 10.5 MB/s 00:22:06.586 13:42:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:22:06.586 13:42:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:22:06.586 13:42:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:22:06.586 13:42:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:06.586 13:42:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:22:06.586 13:42:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:06.586 13:42:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:06.586 13:42:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:22:06.586 /dev/nbd1 00:22:06.844 13:42:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:22:06.844 13:42:14 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:22:06.844 13:42:14 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:22:06.844 13:42:14 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:22:06.844 13:42:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:22:06.844 13:42:14 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:22:06.844 13:42:14 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:22:06.844 13:42:14 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:22:06.844 13:42:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:22:06.844 13:42:14 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:22:06.844 13:42:14 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:22:06.844 1+0 records in 00:22:06.844 1+0 records out 00:22:06.844 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363812 s, 11.3 MB/s 00:22:06.844 13:42:14 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:22:06.844 13:42:14 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:22:06.844 13:42:14 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:22:06.844 13:42:14 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:22:06.844 13:42:14 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:22:06.844 13:42:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:22:06.844 13:42:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:22:06.844 13:42:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:06.844 13:42:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:06.844 13:42:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:22:07.104 { 00:22:07.104 "nbd_device": "/dev/nbd0", 00:22:07.104 "bdev_name": "Malloc0" 00:22:07.104 }, 00:22:07.104 { 00:22:07.104 "nbd_device": "/dev/nbd1", 00:22:07.104 "bdev_name": "Malloc1" 00:22:07.104 } 
00:22:07.104 ]' 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:22:07.104 { 00:22:07.104 "nbd_device": "/dev/nbd0", 00:22:07.104 "bdev_name": "Malloc0" 00:22:07.104 }, 00:22:07.104 { 00:22:07.104 "nbd_device": "/dev/nbd1", 00:22:07.104 "bdev_name": "Malloc1" 00:22:07.104 } 00:22:07.104 ]' 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:22:07.104 /dev/nbd1' 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:22:07.104 /dev/nbd1' 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:22:07.104 256+0 records in 00:22:07.104 256+0 records out 00:22:07.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012567 s, 83.4 MB/s 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:22:07.104 256+0 records in 00:22:07.104 256+0 records out 00:22:07.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0221062 s, 47.4 MB/s 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:22:07.104 256+0 records in 00:22:07.104 256+0 records out 00:22:07.104 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251952 s, 41.6 MB/s 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:07.104 13:42:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:22:07.364 13:42:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:22:07.364 13:42:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:22:07.364 13:42:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:22:07.364 13:42:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:07.364 13:42:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:07.364 13:42:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:22:07.364 13:42:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:22:07.364 13:42:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:22:07.364 13:42:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:22:07.364 13:42:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:22:07.623 13:42:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:22:07.623 13:42:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:22:07.623 13:42:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:22:07.623 13:42:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:22:07.623 13:42:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:22:07.623 13:42:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:22:07.623 13:42:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:22:07.623 13:42:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:22:07.623 13:42:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:22:07.623 13:42:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:22:07.623 13:42:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:22:07.883 13:42:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:22:07.883 13:42:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:22:07.883 13:42:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:22:07.883 13:42:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:22:07.883 13:42:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:22:07.883 13:42:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:22:07.883 13:42:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:22:07.883 13:42:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:22:07.883 13:42:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:22:07.883 13:42:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:22:07.883 13:42:15 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:22:07.883 13:42:15 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:22:07.883 13:42:15 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:22:08.451 13:42:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:22:09.830 [2024-11-20 13:42:17.249540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:09.830 [2024-11-20 13:42:17.368881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.830 [2024-11-20 13:42:17.368886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:10.089 [2024-11-20 13:42:17.583900] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:22:10.089 [2024-11-20 13:42:17.584090] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:22:11.467 13:42:19 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59596 /var/tmp/spdk-nbd.sock 00:22:11.467 13:42:19 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59596 ']' 00:22:11.467 13:42:19 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:22:11.467 13:42:19 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.467 13:42:19 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:22:11.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:22:11.467 13:42:19 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.467 13:42:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:22:11.726 13:42:19 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:11.726 13:42:19 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:22:11.726 13:42:19 event.app_repeat -- event/event.sh@39 -- # killprocess 59596 00:22:11.726 13:42:19 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59596 ']' 00:22:11.726 13:42:19 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59596 00:22:11.726 13:42:19 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:22:11.726 13:42:19 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:11.726 13:42:19 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59596 00:22:11.726 killing process with pid 59596 00:22:11.726 13:42:19 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:11.726 13:42:19 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:11.726 13:42:19 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59596' 00:22:11.726 13:42:19 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59596 00:22:11.726 13:42:19 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59596 00:22:13.106 spdk_app_start is called in Round 0. 00:22:13.106 Shutdown signal received, stop current app iteration 00:22:13.106 Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 reinitialization... 00:22:13.106 spdk_app_start is called in Round 1. 00:22:13.106 Shutdown signal received, stop current app iteration 00:22:13.106 Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 reinitialization... 00:22:13.106 spdk_app_start is called in Round 2. 00:22:13.106 Shutdown signal received, stop current app iteration 00:22:13.106 Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 reinitialization... 00:22:13.106 spdk_app_start is called in Round 3. 00:22:13.106 Shutdown signal received, stop current app iteration 00:22:13.106 13:42:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:22:13.106 13:42:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:22:13.106 00:22:13.106 real 0m20.970s 00:22:13.106 user 0m45.557s 00:22:13.106 sys 0m3.087s 00:22:13.106 13:42:20 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.106 13:42:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:22:13.106 ************************************ 00:22:13.106 END TEST app_repeat 00:22:13.106 ************************************ 00:22:13.106 13:42:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:22:13.106 13:42:20 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:22:13.106 13:42:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:13.107 13:42:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.107 13:42:20 event -- common/autotest_common.sh@10 -- # set +x 00:22:13.107 ************************************ 00:22:13.107 START TEST cpu_locks 00:22:13.107 ************************************ 00:22:13.107 13:42:20 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:22:13.107 * Looking for test storage... 
00:22:13.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:22:13.107 13:42:20 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:13.107 13:42:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:22:13.107 13:42:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:13.107 13:42:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:13.107 13:42:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:22:13.107 13:42:20 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:13.107 13:42:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:13.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.107 --rc genhtml_branch_coverage=1 00:22:13.107 --rc genhtml_function_coverage=1 00:22:13.107 --rc genhtml_legend=1 00:22:13.107 --rc geninfo_all_blocks=1 00:22:13.107 --rc geninfo_unexecuted_blocks=1 00:22:13.107 00:22:13.107 ' 00:22:13.107 13:42:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:13.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.107 --rc genhtml_branch_coverage=1 00:22:13.107 --rc genhtml_function_coverage=1 
00:22:13.107 --rc genhtml_legend=1 00:22:13.107 --rc geninfo_all_blocks=1 00:22:13.107 --rc geninfo_unexecuted_blocks=1 00:22:13.107 00:22:13.107 ' 00:22:13.107 13:42:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:13.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.107 --rc genhtml_branch_coverage=1 00:22:13.107 --rc genhtml_function_coverage=1 00:22:13.107 --rc genhtml_legend=1 00:22:13.107 --rc geninfo_all_blocks=1 00:22:13.107 --rc geninfo_unexecuted_blocks=1 00:22:13.107 00:22:13.107 ' 00:22:13.107 13:42:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:13.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.107 --rc genhtml_branch_coverage=1 00:22:13.107 --rc genhtml_function_coverage=1 00:22:13.107 --rc genhtml_legend=1 00:22:13.107 --rc geninfo_all_blocks=1 00:22:13.107 --rc geninfo_unexecuted_blocks=1 00:22:13.107 00:22:13.107 ' 00:22:13.107 13:42:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:22:13.107 13:42:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:22:13.107 13:42:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:22:13.107 13:42:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:22:13.107 13:42:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:13.107 13:42:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.107 13:42:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:13.107 ************************************ 00:22:13.107 START TEST default_locks 00:22:13.107 ************************************ 00:22:13.107 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:22:13.107 13:42:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60062 00:22:13.107 13:42:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:13.107 13:42:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60062 00:22:13.107 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60062 ']' 00:22:13.107 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:13.107 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:13.107 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:13.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:13.107 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:13.107 13:42:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:22:13.368 [2024-11-20 13:42:20.872197] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
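Note: the cmp_versions trace above is scripts/common.sh deciding whether the installed lcov is pre-2.0 — version strings are split on '.-:' and compared field by field, and since 1.15 < 2 the 1.x-style --rc coverage flags get exported. A standalone sketch of the same compare loop; ver_lt is an illustrative name, not the script's actual helper:

    # Return 0 (true) if dotted version $1 sorts before $2, comparing field by field.
    ver_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for (( i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1   # equal versions are not "less than"
    }

    ver_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x: use the --rc lcov_* flags"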
00:22:13.368 [2024-11-20 13:42:20.872365] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60062 ] 00:22:13.368 [2024-11-20 13:42:21.056296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.628 [2024-11-20 13:42:21.187112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.567 13:42:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:14.567 13:42:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:22:14.567 13:42:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60062 00:22:14.567 13:42:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60062 00:22:14.567 13:42:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:22:14.827 13:42:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60062 00:22:14.827 13:42:22 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60062 ']' 00:22:14.827 13:42:22 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60062 00:22:14.827 13:42:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:22:14.827 13:42:22 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.827 13:42:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60062 00:22:14.827 killing process with pid 60062 00:22:14.827 13:42:22 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.827 13:42:22 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.827 13:42:22 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60062' 00:22:14.827 13:42:22 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60062 00:22:14.827 13:42:22 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60062 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60062 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60062 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60062 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60062 ']' 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.366 13:42:25 
event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.366 ERROR: process (pid: 60062) is no longer running 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:22:17.366 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60062) - No such process 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:22:17.366 00:22:17.366 real 0m4.274s 00:22:17.366 user 0m4.182s 00:22:17.366 sys 0m0.645s 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.366 13:42:25 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:22:17.366 ************************************ 00:22:17.366 END TEST default_locks 00:22:17.366 ************************************ 00:22:17.626 13:42:25 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:22:17.626 13:42:25 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:17.626 13:42:25 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:17.626 13:42:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:17.626 ************************************ 00:22:17.626 START TEST default_locks_via_rpc 00:22:17.626 ************************************ 00:22:17.626 13:42:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:22:17.626 13:42:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60139 00:22:17.626 13:42:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:17.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
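Note: default_locks, which just finished above, asserted both directions of the lock lifecycle — while pid 60062 was alive, locks_exist confirmed via lslocks that the target held a per-core lock, and after killprocess the no_locks glob confirmed nothing was left behind. The check is small enough to run by hand (lock-file naming per the /var/tmp/spdk_cpu_lock_* convention that appears later in this log):

    pid=60062   # any running spdk_tgt PID
    # locks_exist: the target must hold a POSIX lock on its per-core file
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"
    # no_locks: after shutdown, no per-core lock files should remain
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null || echo "no stale lock files"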
00:22:17.626 13:42:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60139 00:22:17.626 13:42:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60139 ']' 00:22:17.626 13:42:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.626 13:42:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:17.626 13:42:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.626 13:42:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:17.626 13:42:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:17.626 [2024-11-20 13:42:25.203704] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:22:17.626 [2024-11-20 13:42:25.203847] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60139 ] 00:22:17.886 [2024-11-20 13:42:25.365560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.886 [2024-11-20 13:42:25.499567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60139 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60139 00:22:18.827 13:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:22:19.410 13:42:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60139 00:22:19.410 13:42:26 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60139 ']' 00:22:19.410 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60139 00:22:19.410 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:22:19.410 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:19.410 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60139 00:22:19.410 killing process with pid 60139 00:22:19.410 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:19.410 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:19.410 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60139' 00:22:19.410 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60139 00:22:19.410 13:42:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60139 00:22:21.949 ************************************ 00:22:21.949 END TEST default_locks_via_rpc 00:22:21.949 ************************************ 00:22:21.949 00:22:21.949 real 0m4.425s 00:22:21.949 user 0m4.357s 00:22:21.949 sys 0m0.689s 00:22:21.949 13:42:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:21.949 13:42:29 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:21.949 13:42:29 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:22:21.949 13:42:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:21.949 13:42:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:21.949 13:42:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:21.949 ************************************ 00:22:21.949 START TEST non_locking_app_on_locked_coremask 00:22:21.949 ************************************ 00:22:21.949 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:22:21.949 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60214 00:22:21.949 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:21.949 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60214 /var/tmp/spdk.sock 00:22:21.949 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60214 ']' 00:22:21.949 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:21.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:21.949 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:21.949 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
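Note: default_locks_via_rpc, which just wrapped up, toggled the same locks at runtime instead of at startup — framework_disable_cpumask_locks drops the per-core lock files (the no_locks glob above came back empty) and framework_enable_cpumask_locks reclaims them. Against a live target the equivalent by hand is one rpc.py call each way, assuming the default /var/tmp/spdk.sock socket:

    scripts/rpc.py framework_disable_cpumask_locks
    lslocks | grep spdk_cpu_lock || echo "locks released"
    scripts/rpc.py framework_enable_cpumask_locks
    lslocks | grep spdk_cpu_lock && echo "locks reclaimed"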
00:22:21.949 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:21.949 13:42:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:22.209 [2024-11-20 13:42:29.689460] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:22:22.210 [2024-11-20 13:42:29.689595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60214 ] 00:22:22.210 [2024-11-20 13:42:29.872477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.468 [2024-11-20 13:42:30.004929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.406 13:42:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.406 13:42:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:22:23.406 13:42:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60236 00:22:23.406 13:42:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:22:23.406 13:42:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60236 /var/tmp/spdk2.sock 00:22:23.406 13:42:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60236 ']' 00:22:23.406 13:42:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:22:23.406 13:42:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.406 13:42:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:22:23.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:22:23.406 13:42:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.406 13:42:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:23.406 [2024-11-20 13:42:31.061140] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:22:23.406 [2024-11-20 13:42:31.061379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60236 ] 00:22:23.665 [2024-11-20 13:42:31.239607] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
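Note: the "CPU core locks deactivated" notice above is the second target (pid 60236) opting out — started with --disable-cpumask-locks, it can bring up a reactor on core 0 even though pid 60214 already holds the core 0 lock. The test reduced to its shape, as a sketch:

    # non_locking_app_on_locked_coremask, sketched:
    build/bin/spdk_tgt -m 0x1 &                          # claims core 0
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks \
        -r /var/tmp/spdk2.sock &                         # shares core 0, claims nothing
    # Without the flag, the second launch would abort with
    # "Cannot create lock on core 0, probably process <pid> has claimed it"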
00:22:23.665 [2024-11-20 13:42:31.239663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.925 [2024-11-20 13:42:31.489522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.464 13:42:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:26.465 13:42:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:22:26.465 13:42:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60214 00:22:26.465 13:42:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60214 00:22:26.465 13:42:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:22:26.465 13:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60214 00:22:26.465 13:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60214 ']' 00:22:26.465 13:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60214 00:22:26.465 13:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:22:26.465 13:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:26.465 13:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60214 00:22:26.465 13:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:26.465 killing process with pid 60214 00:22:26.465 13:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:26.465 13:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60214' 00:22:26.465 13:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60214 00:22:26.465 13:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60214 00:22:31.746 13:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60236 00:22:31.746 13:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60236 ']' 00:22:31.746 13:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60236 00:22:31.746 13:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:22:31.746 13:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.746 13:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60236 00:22:32.006 killing process with pid 60236 00:22:32.006 13:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.006 13:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.006 13:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60236' 00:22:32.006 13:42:39 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60236 00:22:32.006 13:42:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60236 00:22:34.544 ************************************ 00:22:34.544 END TEST non_locking_app_on_locked_coremask 00:22:34.544 ************************************ 00:22:34.544 00:22:34.544 real 0m12.642s 00:22:34.544 user 0m12.932s 00:22:34.544 sys 0m1.295s 00:22:34.544 13:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.544 13:42:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:34.803 13:42:42 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:22:34.803 13:42:42 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:34.803 13:42:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.803 13:42:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:34.803 ************************************ 00:22:34.803 START TEST locking_app_on_unlocked_coremask 00:22:34.803 ************************************ 00:22:34.803 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:22:34.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.803 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60397 00:22:34.803 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:22:34.803 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60397 /var/tmp/spdk.sock 00:22:34.803 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60397 ']' 00:22:34.803 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.803 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:34.803 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.803 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:34.803 13:42:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:34.803 [2024-11-20 13:42:42.377100] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:22:34.803 [2024-11-20 13:42:42.377222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60397 ] 00:22:35.061 [2024-11-20 13:42:42.558053] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
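Note: locking_app_on_unlocked_coremask flips the roles — the first target (pid 60397, just starting above) runs with --disable-cpumask-locks and claims nothing, and it is the second target, launched next without the flag, that takes the core 0 lock; locks_exist is then checked against the second PID. Sketched, with <pid2> as a placeholder:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &   # unlocked first instance
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &    # this one takes the core 0 lock
    lslocks -p <pid2> | grep -q spdk_cpu_lock             # passes against the second PID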
00:22:35.061 [2024-11-20 13:42:42.558194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:35.061 [2024-11-20 13:42:42.689346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:22:35.999 13:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:35.999 13:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:22:35.999 13:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60413 00:22:35.999 13:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60413 /var/tmp/spdk2.sock 00:22:35.999 13:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60413 ']' 00:22:35.999 13:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:22:35.999 13:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:22:35.999 13:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:35.999 13:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:22:35.999 13:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:35.999 13:42:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:36.258 [2024-11-20 13:42:43.769688] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:22:36.258 [2024-11-20 13:42:43.769829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60413 ] 00:22:36.258 [2024-11-20 13:42:43.952637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:36.517 [2024-11-20 13:42:44.213953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.065 13:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:39.065 13:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:22:39.065 13:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60413 00:22:39.065 13:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60413 00:22:39.065 13:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:22:39.325 13:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60397 00:22:39.325 13:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60397 ']' 00:22:39.325 13:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60397 00:22:39.325 13:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:22:39.325 13:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.325 13:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60397 00:22:39.325 killing process with pid 60397 00:22:39.325 13:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:39.325 13:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:39.325 13:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60397' 00:22:39.325 13:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60397 00:22:39.325 13:42:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60397 00:22:44.638 13:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60413 00:22:44.638 13:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60413 ']' 00:22:44.638 13:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60413 00:22:44.638 13:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:22:44.638 13:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.638 13:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60413 00:22:44.638 killing process with pid 60413 00:22:44.638 13:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:44.638 13:42:52 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:44.638 13:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60413' 00:22:44.638 13:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60413 00:22:44.638 13:42:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60413 00:22:47.927 00:22:47.927 real 0m12.736s 00:22:47.927 user 0m13.065s 00:22:47.927 sys 0m1.257s 00:22:47.927 13:42:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:47.927 13:42:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:47.927 ************************************ 00:22:47.927 END TEST locking_app_on_unlocked_coremask 00:22:47.927 ************************************ 00:22:47.927 13:42:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:22:47.927 13:42:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:47.927 13:42:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:47.927 13:42:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:47.927 ************************************ 00:22:47.927 START TEST locking_app_on_locked_coremask 00:22:47.927 ************************************ 00:22:47.927 13:42:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:22:47.927 13:42:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60572 00:22:47.927 13:42:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:47.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.927 13:42:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60572 /var/tmp/spdk.sock 00:22:47.927 13:42:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60572 ']' 00:22:47.927 13:42:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.927 13:42:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.927 13:42:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.927 13:42:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.927 13:42:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:47.927 [2024-11-20 13:42:55.186609] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:22:47.927 [2024-11-20 13:42:55.186763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60572 ] 00:22:47.927 [2024-11-20 13:42:55.354019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.927 [2024-11-20 13:42:55.485601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.870 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.870 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:22:48.870 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60588 00:22:48.870 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:22:48.870 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60588 /var/tmp/spdk2.sock 00:22:48.870 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:22:48.870 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60588 /var/tmp/spdk2.sock 00:22:48.870 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:22:48.870 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.870 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:22:48.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:22:48.870 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:48.871 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60588 /var/tmp/spdk2.sock 00:22:48.871 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60588 ']' 00:22:48.871 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:22:48.871 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.871 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:22:48.871 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.871 13:42:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:49.142 [2024-11-20 13:42:56.606827] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:22:49.142 [2024-11-20 13:42:56.607046] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60588 ] 00:22:49.142 [2024-11-20 13:42:56.792358] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60572 has claimed it. 00:22:49.142 [2024-11-20 13:42:56.792431] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:22:49.710 ERROR: process (pid: 60588) is no longer running 00:22:49.710 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60588) - No such process 00:22:49.710 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.710 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:22:49.710 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:22:49.710 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:49.710 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:49.710 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:49.710 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60572 00:22:49.710 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60572 00:22:49.710 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:22:49.969 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60572 00:22:49.969 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60572 ']' 00:22:49.969 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60572 00:22:49.969 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:22:49.969 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:49.969 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60572 00:22:50.229 killing process with pid 60572 00:22:50.229 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:50.229 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:50.229 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60572' 00:22:50.229 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60572 00:22:50.229 13:42:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60572 00:22:53.538 ************************************ 00:22:53.538 END TEST locking_app_on_locked_coremask 00:22:53.538 ************************************ 00:22:53.538 00:22:53.538 real 0m5.486s 00:22:53.538 user 0m5.690s 00:22:53.538 sys 0m0.881s 00:22:53.538 13:43:00 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:53.538 13:43:00 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:53.538 13:43:00 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:22:53.538 13:43:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:53.538 13:43:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:53.538 13:43:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:53.538 ************************************ 00:22:53.538 START TEST locking_overlapped_coremask 00:22:53.538 ************************************ 00:22:53.538 13:43:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:22:53.538 13:43:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60663 00:22:53.538 13:43:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60663 /var/tmp/spdk.sock 00:22:53.538 13:43:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60663 ']' 00:22:53.538 13:43:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.538 13:43:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:22:53.538 13:43:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:53.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.538 13:43:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.538 13:43:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:53.538 13:43:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:53.538 [2024-11-20 13:43:00.717952] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:22:53.538 [2024-11-20 13:43:00.718104] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60663 ] 00:22:53.538 [2024-11-20 13:43:00.896685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:53.538 [2024-11-20 13:43:01.040437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.538 [2024-11-20 13:43:01.040561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.538 [2024-11-20 13:43:01.040612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60692 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60692 /var/tmp/spdk2.sock 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60692 /var/tmp/spdk2.sock 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60692 /var/tmp/spdk2.sock 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60692 ']' 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:22:54.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:54.474 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:54.732 [2024-11-20 13:43:02.225125] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:22:54.732 [2024-11-20 13:43:02.225376] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60692 ] 00:22:54.732 [2024-11-20 13:43:02.415397] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60663 has claimed it. 00:22:54.732 [2024-11-20 13:43:02.415501] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:22:55.299 ERROR: process (pid: 60692) is no longer running 00:22:55.299 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60692) - No such process 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60663 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60663 ']' 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60663 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60663 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60663' 00:22:55.299 killing process with pid 60663 00:22:55.299 13:43:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60663 00:22:55.299 13:43:02 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60663 00:22:58.579 00:22:58.579 real 0m5.216s 00:22:58.579 user 0m14.296s 00:22:58.579 sys 0m0.662s 00:22:58.579 13:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:58.579 13:43:05 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:22:58.579 ************************************ 00:22:58.579 END TEST locking_overlapped_coremask 00:22:58.579 ************************************ 00:22:58.579 13:43:05 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:22:58.579 13:43:05 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:58.579 13:43:05 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:58.579 13:43:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:22:58.579 ************************************ 00:22:58.579 START TEST locking_overlapped_coremask_via_rpc 00:22:58.579 ************************************ 00:22:58.579 13:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:22:58.579 13:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60762 00:22:58.579 13:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:22:58.579 13:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60762 /var/tmp/spdk.sock 00:22:58.579 13:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60762 ']' 00:22:58.579 13:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.579 13:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.579 13:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.579 13:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.579 13:43:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:58.579 [2024-11-20 13:43:05.987663] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:22:58.579 [2024-11-20 13:43:05.987826] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60762 ] 00:22:58.579 [2024-11-20 13:43:06.171145] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
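Note: locking_overlapped_coremask_via_rpc runs both targets with locking disabled at startup and only turns it on over RPC afterwards. The two masks are picked to collide on exactly one core — 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is the contested one. Quick arithmetic check:

    # 0x07 = 0b00111 -> cores 0,1,2
    # 0x1c = 0b11100 -> cores 2,3,4
    printf '0x%x\n' $(( 0x7 & 0x1c ))   # 0x4: only bit 2 is shared -> core 2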
00:22:58.579 [2024-11-20 13:43:06.171223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:58.837 [2024-11-20 13:43:06.311183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.837 [2024-11-20 13:43:06.311335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.837 [2024-11-20 13:43:06.311369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.823 13:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.823 13:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:22:59.823 13:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60785 00:22:59.823 13:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:22:59.823 13:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60785 /var/tmp/spdk2.sock 00:22:59.823 13:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60785 ']' 00:22:59.823 13:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:22:59.823 13:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.823 13:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:22:59.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:22:59.823 13:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.823 13:43:07 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:22:59.823 [2024-11-20 13:43:07.479392] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:22:59.823 [2024-11-20 13:43:07.479630] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60785 ] 00:23:00.102 [2024-11-20 13:43:07.669933] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
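Note: with both targets now coming up unlocked, the test enables locks on the first over RPC and then requires the same call to fail on the second. Expected failures are encoded with autotest_common.sh's NOT wrapper, whose effect (visible in the es=1 bookkeeping below) reduces to inverting the wrapped command's exit status — a sketch of the behavior, not the real helper:

    NOT() {   # succeed only if the wrapped command fails
        if "$@"; then
            return 1
        fi
        return 0
    }
    NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # as in the trace below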
00:23:00.102 [2024-11-20 13:43:07.670021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:00.360 [2024-11-20 13:43:07.950433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:00.360 [2024-11-20 13:43:07.950493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.360 [2024-11-20 13:43:07.950501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:02.886 [2024-11-20 13:43:10.373997] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60762 has claimed it. 
00:23:02.886 request: 00:23:02.886 { 00:23:02.886 "method": "framework_enable_cpumask_locks", 00:23:02.886 "req_id": 1 00:23:02.886 } 00:23:02.886 Got JSON-RPC error response 00:23:02.886 response: 00:23:02.886 { 00:23:02.886 "code": -32603, 00:23:02.886 "message": "Failed to claim CPU core: 2" 00:23:02.886 } 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60762 /var/tmp/spdk.sock 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60762 ']' 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.886 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:03.144 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.144 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:23:03.144 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60785 /var/tmp/spdk2.sock 00:23:03.144 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60785 ']' 00:23:03.144 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:23:03.144 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:03.144 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:23:03.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:23:03.144 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:03.144 13:43:10 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:03.403 13:43:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:03.403 13:43:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:23:03.403 13:43:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:23:03.403 13:43:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:23:03.403 13:43:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:23:03.403 13:43:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:23:03.403 00:23:03.403 real 0m5.159s 00:23:03.403 user 0m1.859s 00:23:03.403 sys 0m0.225s 00:23:03.403 13:43:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:03.403 13:43:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:23:03.403 ************************************ 00:23:03.403 END TEST locking_overlapped_coremask_via_rpc 00:23:03.403 ************************************ 00:23:03.403 13:43:11 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:23:03.403 13:43:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60762 ]] 00:23:03.403 13:43:11 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60762 00:23:03.403 13:43:11 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60762 ']' 00:23:03.403 13:43:11 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60762 00:23:03.403 13:43:11 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:23:03.403 13:43:11 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:03.403 13:43:11 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60762 00:23:03.403 13:43:11 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:03.403 killing process with pid 60762 13:43:11 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:03.403 13:43:11 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60762' 00:23:03.403 13:43:11 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60762 00:23:03.403 13:43:11 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60762 00:23:06.686 13:43:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60785 ]] 00:23:06.686 13:43:14 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60785 00:23:06.686 13:43:14 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60785 ']' 00:23:06.686 13:43:14 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60785 00:23:06.686 13:43:14 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:23:06.686 13:43:14 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:06.686 13:43:14 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60785 00:23:06.686 killing process with pid 60785 13:43:14 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:06.686 13:43:14 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:06.686 13:43:14 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60785' 00:23:06.686 13:43:14 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60785 00:23:06.686 13:43:14 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60785
00:23:10.047 13:43:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:23:10.047 13:43:17 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:23:10.047 13:43:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60762 ]] 00:23:10.047 13:43:17 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60762 00:23:10.047 13:43:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60762 ']' 00:23:10.047 13:43:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60762 00:23:10.047 Process with pid 60762 is not found 00:23:10.047 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60762) - No such process 00:23:10.047 13:43:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60762 is not found' 00:23:10.047 13:43:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60785 ]] 00:23:10.047 13:43:17 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60785 00:23:10.047 13:43:17 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60785 ']' 00:23:10.047 Process with pid 60785 is not found 00:23:10.047 13:43:17 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60785 00:23:10.047 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60785) - No such process 00:23:10.047 13:43:17 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60785 is not found' 00:23:10.047 13:43:17 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:23:10.047 00:23:10.047 real 0m56.524s 00:23:10.047 user 1m39.529s 00:23:10.047 sys 0m6.874s 00:23:10.047 13:43:17 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:10.047 ************************************ 00:23:10.047 END TEST cpu_locks 00:23:10.047 ************************************ 00:23:10.047 13:43:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:23:10.047 ************************************ 00:23:10.047 END TEST event 00:23:10.047 ************************************ 00:23:10.047 00:23:10.047 real 1m30.222s 00:23:10.047 user 2m47.330s 00:23:10.047 sys 0m11.299s 00:23:10.047 13:43:17 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:10.047 13:43:17 event -- common/autotest_common.sh@10 -- # set +x 00:23:10.047 13:43:17 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:23:10.047 13:43:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:10.047 13:43:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:10.047 13:43:17 -- common/autotest_common.sh@10 -- # set +x 00:23:10.047 ************************************ 00:23:10.047 START TEST thread 00:23:10.047 ************************************ 00:23:10.047 13:43:17 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:23:10.047 * Looking for test storage...
00:23:10.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:23:10.047 13:43:17 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:10.047 13:43:17 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:23:10.047 13:43:17 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:10.047 13:43:17 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:10.047 13:43:17 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:10.047 13:43:17 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:10.047 13:43:17 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:10.047 13:43:17 thread -- scripts/common.sh@336 -- # IFS=.-: 00:23:10.047 13:43:17 thread -- scripts/common.sh@336 -- # read -ra ver1 00:23:10.047 13:43:17 thread -- scripts/common.sh@337 -- # IFS=.-: 00:23:10.047 13:43:17 thread -- scripts/common.sh@337 -- # read -ra ver2 00:23:10.047 13:43:17 thread -- scripts/common.sh@338 -- # local 'op=<' 00:23:10.047 13:43:17 thread -- scripts/common.sh@340 -- # ver1_l=2 00:23:10.047 13:43:17 thread -- scripts/common.sh@341 -- # ver2_l=1 00:23:10.047 13:43:17 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:10.047 13:43:17 thread -- scripts/common.sh@344 -- # case "$op" in 00:23:10.047 13:43:17 thread -- scripts/common.sh@345 -- # : 1 00:23:10.047 13:43:17 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:10.047 13:43:17 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:10.047 13:43:17 thread -- scripts/common.sh@365 -- # decimal 1 00:23:10.047 13:43:17 thread -- scripts/common.sh@353 -- # local d=1 00:23:10.047 13:43:17 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:10.047 13:43:17 thread -- scripts/common.sh@355 -- # echo 1 00:23:10.047 13:43:17 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:23:10.047 13:43:17 thread -- scripts/common.sh@366 -- # decimal 2 00:23:10.047 13:43:17 thread -- scripts/common.sh@353 -- # local d=2 00:23:10.047 13:43:17 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:10.047 13:43:17 thread -- scripts/common.sh@355 -- # echo 2 00:23:10.047 13:43:17 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:23:10.047 13:43:17 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:10.047 13:43:17 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:10.047 13:43:17 thread -- scripts/common.sh@368 -- # return 0 00:23:10.047 13:43:17 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:10.047 13:43:17 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.047 --rc genhtml_branch_coverage=1 00:23:10.047 --rc genhtml_function_coverage=1 00:23:10.047 --rc genhtml_legend=1 00:23:10.047 --rc geninfo_all_blocks=1 00:23:10.047 --rc geninfo_unexecuted_blocks=1 00:23:10.047 00:23:10.047 ' 00:23:10.047 13:43:17 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.047 --rc genhtml_branch_coverage=1 00:23:10.047 --rc genhtml_function_coverage=1 00:23:10.047 --rc genhtml_legend=1 00:23:10.047 --rc geninfo_all_blocks=1 00:23:10.047 --rc geninfo_unexecuted_blocks=1 00:23:10.047 00:23:10.047 ' 00:23:10.047 13:43:17 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:23:10.047 --rc genhtml_branch_coverage=1 00:23:10.047 --rc genhtml_function_coverage=1 00:23:10.047 --rc genhtml_legend=1 00:23:10.047 --rc geninfo_all_blocks=1 00:23:10.047 --rc geninfo_unexecuted_blocks=1 00:23:10.047 00:23:10.047 ' 00:23:10.047 13:43:17 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:10.047 --rc genhtml_branch_coverage=1 00:23:10.047 --rc genhtml_function_coverage=1 00:23:10.047 --rc genhtml_legend=1 00:23:10.047 --rc geninfo_all_blocks=1 00:23:10.047 --rc geninfo_unexecuted_blocks=1 00:23:10.047 00:23:10.047 ' 00:23:10.047 13:43:17 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:23:10.047 13:43:17 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:23:10.047 13:43:17 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:10.047 13:43:17 thread -- common/autotest_common.sh@10 -- # set +x 00:23:10.047 ************************************ 00:23:10.047 START TEST thread_poller_perf 00:23:10.047 ************************************ 00:23:10.047 13:43:17 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:23:10.047 [2024-11-20 13:43:17.414017] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:23:10.047 [2024-11-20 13:43:17.414388] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60997 ] 00:23:10.047 [2024-11-20 13:43:17.603149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:10.047 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:23:10.047 [2024-11-20 13:43:17.742126] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.423 [2024-11-20T13:43:19.142Z] ====================================== 00:23:11.423 [2024-11-20T13:43:19.142Z] busy:2304809240 (cyc) 00:23:11.423 [2024-11-20T13:43:19.142Z] total_run_count: 315000 00:23:11.423 [2024-11-20T13:43:19.142Z] tsc_hz: 2290000000 (cyc) 00:23:11.423 [2024-11-20T13:43:19.142Z] ====================================== 00:23:11.423 [2024-11-20T13:43:19.142Z] poller_cost: 7316 (cyc), 3194 (nsec) 00:23:11.423 00:23:11.423 real 0m1.648s 00:23:11.423 user 0m1.430s 00:23:11.423 sys 0m0.109s 00:23:11.423 13:43:19 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:11.423 13:43:19 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:23:11.423 ************************************ 00:23:11.423 END TEST thread_poller_perf 00:23:11.423 ************************************ 00:23:11.423 13:43:19 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:23:11.423 13:43:19 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:23:11.423 13:43:19 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:11.423 13:43:19 thread -- common/autotest_common.sh@10 -- # set +x 00:23:11.423 ************************************ 00:23:11.423 START TEST thread_poller_perf 00:23:11.423 ************************************ 00:23:11.423 13:43:19 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:23:11.423 [2024-11-20 13:43:19.099682] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:23:11.423 [2024-11-20 13:43:19.100059] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61035 ] 00:23:11.756 [2024-11-20 13:43:19.278274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.756 [2024-11-20 13:43:19.423491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.756 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:23:13.132 [2024-11-20T13:43:20.851Z] ====================================== 00:23:13.132 [2024-11-20T13:43:20.851Z] busy:2294380846 (cyc) 00:23:13.132 [2024-11-20T13:43:20.851Z] total_run_count: 4055000 00:23:13.132 [2024-11-20T13:43:20.851Z] tsc_hz: 2290000000 (cyc) 00:23:13.132 [2024-11-20T13:43:20.851Z] ====================================== 00:23:13.132 [2024-11-20T13:43:20.851Z] poller_cost: 565 (cyc), 246 (nsec) 00:23:13.132 ************************************ 00:23:13.132 END TEST thread_poller_perf 00:23:13.132 ************************************ 00:23:13.132 00:23:13.132 real 0m1.646s 00:23:13.132 user 0m1.432s 00:23:13.132 sys 0m0.103s 00:23:13.132 13:43:20 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:13.132 13:43:20 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:23:13.132 13:43:20 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:23:13.132 00:23:13.132 real 0m3.601s 00:23:13.132 user 0m3.013s 00:23:13.132 sys 0m0.378s 00:23:13.132 13:43:20 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:13.132 13:43:20 thread -- common/autotest_common.sh@10 -- # set +x 00:23:13.132 ************************************ 00:23:13.132 END TEST thread 00:23:13.132 ************************************ 00:23:13.132 13:43:20 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:23:13.132 13:43:20 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:23:13.132 13:43:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:13.132 13:43:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:13.132 13:43:20 -- common/autotest_common.sh@10 -- # set +x 00:23:13.132 ************************************ 00:23:13.132 START TEST app_cmdline 00:23:13.132 ************************************ 00:23:13.132 13:43:20 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:23:13.391 * Looking for test storage... 
00:23:13.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:23:13.391 13:43:20 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:13.391 13:43:20 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:23:13.391 13:43:20 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:13.391 13:43:20 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@345 -- # : 1 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:23:13.391 13:43:20 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:13.392 13:43:20 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:23:13.392 13:43:20 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:23:13.392 13:43:20 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:13.392 13:43:20 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:13.392 13:43:20 app_cmdline -- scripts/common.sh@368 -- # return 0 00:23:13.392 13:43:20 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:13.392 13:43:20 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:13.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.392 --rc genhtml_branch_coverage=1 00:23:13.392 --rc genhtml_function_coverage=1 00:23:13.392 --rc genhtml_legend=1 00:23:13.392 --rc geninfo_all_blocks=1 00:23:13.392 --rc geninfo_unexecuted_blocks=1 00:23:13.392 00:23:13.392 ' 00:23:13.392 13:43:20 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:13.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.392 --rc genhtml_branch_coverage=1 00:23:13.392 --rc genhtml_function_coverage=1 00:23:13.392 --rc genhtml_legend=1 00:23:13.392 --rc geninfo_all_blocks=1 00:23:13.392 --rc geninfo_unexecuted_blocks=1 00:23:13.392 
00:23:13.392 ' 00:23:13.392 13:43:20 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:13.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.392 --rc genhtml_branch_coverage=1 00:23:13.392 --rc genhtml_function_coverage=1 00:23:13.392 --rc genhtml_legend=1 00:23:13.392 --rc geninfo_all_blocks=1 00:23:13.392 --rc geninfo_unexecuted_blocks=1 00:23:13.392 00:23:13.392 ' 00:23:13.392 13:43:20 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:13.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.392 --rc genhtml_branch_coverage=1 00:23:13.392 --rc genhtml_function_coverage=1 00:23:13.392 --rc genhtml_legend=1 00:23:13.392 --rc geninfo_all_blocks=1 00:23:13.392 --rc geninfo_unexecuted_blocks=1 00:23:13.392 00:23:13.392 ' 00:23:13.392 13:43:20 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:23:13.392 13:43:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61117 00:23:13.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.392 13:43:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61117 00:23:13.392 13:43:21 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61117 ']' 00:23:13.392 13:43:21 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.392 13:43:21 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.392 13:43:21 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:23:13.392 13:43:21 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.392 13:43:21 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.392 13:43:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:23:13.651 [2024-11-20 13:43:21.118294] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:23:13.651 [2024-11-20 13:43:21.118499] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61117 ] 00:23:13.911 [2024-11-20 13:43:21.290290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:13.911 [2024-11-20 13:43:21.448471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.849 13:43:22 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.849 13:43:22 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:23:14.849 13:43:22 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:23:15.109 { 00:23:15.109 "version": "SPDK v25.01-pre git sha1 d58114851", 00:23:15.109 "fields": { 00:23:15.109 "major": 25, 00:23:15.109 "minor": 1, 00:23:15.109 "patch": 0, 00:23:15.109 "suffix": "-pre", 00:23:15.109 "commit": "d58114851" 00:23:15.109 } 00:23:15.109 } 00:23:15.109 13:43:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:23:15.109 13:43:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:23:15.109 13:43:22 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:23:15.109 13:43:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:23:15.109 13:43:22 app_cmdline -- app/cmdline.sh@26 -- # sort 00:23:15.109 13:43:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:23:15.109 13:43:22 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.109 13:43:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:23:15.109 13:43:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:23:15.109 13:43:22 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.109 13:43:22 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:23:15.109 13:43:22 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:23:15.109 13:43:22 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:23:15.109 13:43:22 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:23:15.109 13:43:22 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:23:15.109 13:43:22 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.109 13:43:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.109 13:43:22 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.109 13:43:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.109 13:43:22 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.109 13:43:22 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.109 13:43:22 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.109 13:43:22 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:23:15.109 13:43:22 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:23:15.373 request: 00:23:15.373 { 00:23:15.373 "method": "env_dpdk_get_mem_stats", 00:23:15.373 "req_id": 1 00:23:15.373 } 00:23:15.373 Got JSON-RPC error response 00:23:15.373 response: 00:23:15.373 { 00:23:15.373 "code": -32601, 00:23:15.373 "message": "Method not found" 00:23:15.373 } 00:23:15.373 13:43:23 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:23:15.373 13:43:23 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:15.373 13:43:23 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:15.373 13:43:23 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:15.373 13:43:23 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61117 00:23:15.373 13:43:23 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61117 ']' 00:23:15.373 13:43:23 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61117 00:23:15.373 13:43:23 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:23:15.373 13:43:23 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.373 13:43:23 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61117 00:23:15.641 killing process with pid 61117 13:43:23 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:15.641 13:43:23 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:15.641 13:43:23 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61117' 00:23:15.641 13:43:23 app_cmdline -- common/autotest_common.sh@973 -- # kill 61117 00:23:15.641 13:43:23 app_cmdline -- common/autotest_common.sh@978 -- # wait 61117 00:23:18.930 00:23:18.930 real 0m5.188s 00:23:18.930 user 0m5.650s 00:23:18.930 sys 0m0.600s 00:23:18.930 13:43:25 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.930 13:43:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:23:18.930 ************************************ 00:23:18.930 END TEST app_cmdline 00:23:18.930 ************************************ 00:23:18.930 13:43:26 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:23:18.930 13:43:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:18.930 13:43:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.930 13:43:26 -- common/autotest_common.sh@10 -- # set +x 00:23:18.930 ************************************ 00:23:18.930 START TEST version 00:23:18.930 ************************************ 00:23:18.930 13:43:26 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:23:18.930 * Looking for test storage...
00:23:18.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:23:18.930 13:43:26 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:18.930 13:43:26 version -- common/autotest_common.sh@1693 -- # lcov --version 00:23:18.930 13:43:26 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:18.930 13:43:26 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:18.930 13:43:26 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:18.930 13:43:26 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:18.930 13:43:26 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:18.930 13:43:26 version -- scripts/common.sh@336 -- # IFS=.-: 00:23:18.930 13:43:26 version -- scripts/common.sh@336 -- # read -ra ver1 00:23:18.930 13:43:26 version -- scripts/common.sh@337 -- # IFS=.-: 00:23:18.930 13:43:26 version -- scripts/common.sh@337 -- # read -ra ver2 00:23:18.930 13:43:26 version -- scripts/common.sh@338 -- # local 'op=<' 00:23:18.930 13:43:26 version -- scripts/common.sh@340 -- # ver1_l=2 00:23:18.930 13:43:26 version -- scripts/common.sh@341 -- # ver2_l=1 00:23:18.930 13:43:26 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:18.930 13:43:26 version -- scripts/common.sh@344 -- # case "$op" in 00:23:18.930 13:43:26 version -- scripts/common.sh@345 -- # : 1 00:23:18.930 13:43:26 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:18.930 13:43:26 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:18.930 13:43:26 version -- scripts/common.sh@365 -- # decimal 1 00:23:18.930 13:43:26 version -- scripts/common.sh@353 -- # local d=1 00:23:18.930 13:43:26 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:18.930 13:43:26 version -- scripts/common.sh@355 -- # echo 1 00:23:18.930 13:43:26 version -- scripts/common.sh@365 -- # ver1[v]=1 00:23:18.930 13:43:26 version -- scripts/common.sh@366 -- # decimal 2 00:23:18.930 13:43:26 version -- scripts/common.sh@353 -- # local d=2 00:23:18.930 13:43:26 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:18.930 13:43:26 version -- scripts/common.sh@355 -- # echo 2 00:23:18.930 13:43:26 version -- scripts/common.sh@366 -- # ver2[v]=2 00:23:18.930 13:43:26 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:18.930 13:43:26 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:18.930 13:43:26 version -- scripts/common.sh@368 -- # return 0 00:23:18.930 13:43:26 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:18.930 13:43:26 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:18.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.930 --rc genhtml_branch_coverage=1 00:23:18.930 --rc genhtml_function_coverage=1 00:23:18.930 --rc genhtml_legend=1 00:23:18.930 --rc geninfo_all_blocks=1 00:23:18.930 --rc geninfo_unexecuted_blocks=1 00:23:18.930 00:23:18.930 ' 00:23:18.930 13:43:26 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:18.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.931 --rc genhtml_branch_coverage=1 00:23:18.931 --rc genhtml_function_coverage=1 00:23:18.931 --rc genhtml_legend=1 00:23:18.931 --rc geninfo_all_blocks=1 00:23:18.931 --rc geninfo_unexecuted_blocks=1 00:23:18.931 00:23:18.931 ' 00:23:18.931 13:43:26 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:18.931 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:23:18.931 --rc genhtml_branch_coverage=1 00:23:18.931 --rc genhtml_function_coverage=1 00:23:18.931 --rc genhtml_legend=1 00:23:18.931 --rc geninfo_all_blocks=1 00:23:18.931 --rc geninfo_unexecuted_blocks=1 00:23:18.931 00:23:18.931 ' 00:23:18.931 13:43:26 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:18.931 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:18.931 --rc genhtml_branch_coverage=1 00:23:18.931 --rc genhtml_function_coverage=1 00:23:18.931 --rc genhtml_legend=1 00:23:18.931 --rc geninfo_all_blocks=1 00:23:18.931 --rc geninfo_unexecuted_blocks=1 00:23:18.931 00:23:18.931 ' 00:23:18.931 13:43:26 version -- app/version.sh@17 -- # get_header_version major 00:23:18.931 13:43:26 version -- app/version.sh@14 -- # tr -d '"' 00:23:18.931 13:43:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:23:18.931 13:43:26 version -- app/version.sh@14 -- # cut -f2 00:23:18.931 13:43:26 version -- app/version.sh@17 -- # major=25 00:23:18.931 13:43:26 version -- app/version.sh@18 -- # get_header_version minor 00:23:18.931 13:43:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:23:18.931 13:43:26 version -- app/version.sh@14 -- # cut -f2 00:23:18.931 13:43:26 version -- app/version.sh@14 -- # tr -d '"' 00:23:18.931 13:43:26 version -- app/version.sh@18 -- # minor=1 00:23:18.931 13:43:26 version -- app/version.sh@19 -- # get_header_version patch 00:23:18.931 13:43:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:23:18.931 13:43:26 version -- app/version.sh@14 -- # tr -d '"' 00:23:18.931 13:43:26 version -- app/version.sh@14 -- # cut -f2 00:23:18.931 13:43:26 version -- app/version.sh@19 -- # patch=0 00:23:18.931 13:43:26 version -- app/version.sh@20 -- # get_header_version suffix 00:23:18.931 13:43:26 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:23:18.931 13:43:26 version -- app/version.sh@14 -- # cut -f2 00:23:18.931 13:43:26 version -- app/version.sh@14 -- # tr -d '"' 00:23:18.931 13:43:26 version -- app/version.sh@20 -- # suffix=-pre 00:23:18.931 13:43:26 version -- app/version.sh@22 -- # version=25.1 00:23:18.931 13:43:26 version -- app/version.sh@25 -- # (( patch != 0 )) 00:23:18.931 13:43:26 version -- app/version.sh@28 -- # version=25.1rc0 00:23:18.931 13:43:26 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:23:18.931 13:43:26 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:23:18.931 13:43:26 version -- app/version.sh@30 -- # py_version=25.1rc0 00:23:18.931 13:43:26 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:23:18.931 ************************************ 00:23:18.931 END TEST version 00:23:18.931 ************************************ 00:23:18.931 00:23:18.931 real 0m0.337s 00:23:18.931 user 0m0.214s 00:23:18.931 sys 0m0.176s 00:23:18.931 13:43:26 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:18.931 13:43:26 version -- common/autotest_common.sh@10 -- # set +x 00:23:18.931 13:43:26 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:23:18.931 13:43:26 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:23:18.931 13:43:26 -- spdk/autotest.sh@194 -- # uname -s 00:23:18.931 13:43:26 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:23:18.931 13:43:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:23:18.931 13:43:26 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:23:18.931 13:43:26 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:23:18.931 13:43:26 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:23:18.931 13:43:26 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:18.931 13:43:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:18.931 13:43:26 -- common/autotest_common.sh@10 -- # set +x 00:23:18.931 ************************************ 00:23:18.931 START TEST blockdev_nvme 00:23:18.931 ************************************ 00:23:18.931 13:43:26 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:23:18.931 * Looking for test storage... 00:23:18.931 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:23:18.931 13:43:26 blockdev_nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:18.931 13:43:26 blockdev_nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:23:18.931 13:43:26 blockdev_nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:19.191 13:43:26 blockdev_nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:19.191 13:43:26 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:23:19.191 13:43:26 blockdev_nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:19.191 13:43:26 blockdev_nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:19.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.191 --rc genhtml_branch_coverage=1 00:23:19.191 --rc genhtml_function_coverage=1 00:23:19.191 --rc genhtml_legend=1 00:23:19.191 --rc geninfo_all_blocks=1 00:23:19.191 --rc geninfo_unexecuted_blocks=1 00:23:19.191 00:23:19.191 ' 00:23:19.191 13:43:26 blockdev_nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:19.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.191 --rc genhtml_branch_coverage=1 00:23:19.192 --rc genhtml_function_coverage=1 00:23:19.192 --rc genhtml_legend=1 00:23:19.192 --rc geninfo_all_blocks=1 00:23:19.192 --rc geninfo_unexecuted_blocks=1 00:23:19.192 00:23:19.192 ' 00:23:19.192 13:43:26 blockdev_nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:19.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.192 --rc genhtml_branch_coverage=1 00:23:19.192 --rc genhtml_function_coverage=1 00:23:19.192 --rc genhtml_legend=1 00:23:19.192 --rc geninfo_all_blocks=1 00:23:19.192 --rc geninfo_unexecuted_blocks=1 00:23:19.192 00:23:19.192 ' 00:23:19.192 13:43:26 blockdev_nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:19.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:19.192 --rc genhtml_branch_coverage=1 00:23:19.192 --rc genhtml_function_coverage=1 00:23:19.192 --rc genhtml_legend=1 00:23:19.192 --rc geninfo_all_blocks=1 00:23:19.192 --rc geninfo_unexecuted_blocks=1 00:23:19.192 00:23:19.192 ' 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:23:19.192 13:43:26 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61322 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:23:19.192 13:43:26 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61322 00:23:19.192 13:43:26 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61322 ']' 00:23:19.192 13:43:26 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.192 13:43:26 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.192 13:43:26 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.192 13:43:26 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.192 13:43:26 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:19.192 [2024-11-20 13:43:26.808065] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:23:19.192 [2024-11-20 13:43:26.808285] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61322 ] 00:23:19.451 [2024-11-20 13:43:26.989917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.451 [2024-11-20 13:43:27.124667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.392 13:43:28 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.392 13:43:28 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:23:20.392 13:43:28 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:23:20.392 13:43:28 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:23:20.392 13:43:28 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:23:20.392 13:43:28 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:23:20.392 13:43:28 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:23:20.651 13:43:28 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:23:20.651 13:43:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.651 13:43:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:20.910 13:43:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.910 13:43:28 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:23:20.910 13:43:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.910 13:43:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:20.910 13:43:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.910 13:43:28 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:23:20.910 13:43:28 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:23:20.910 13:43:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.910 13:43:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:20.910 13:43:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.910 13:43:28 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:23:20.910 13:43:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.910 13:43:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:20.910 13:43:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.910 13:43:28 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:23:20.910 13:43:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.910 13:43:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:20.910 13:43:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.910 13:43:28 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:23:20.910 13:43:28 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:23:20.910 13:43:28 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:23:20.910 13:43:28 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.910 13:43:28 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:21.171 13:43:28 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.171 13:43:28 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:23:21.171 13:43:28 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:23:21.172 13:43:28 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "73e5f6a2-b846-4d52-ad9a-3863b7a0142e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "73e5f6a2-b846-4d52-ad9a-3863b7a0142e",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "b035104e-9a4a-4dc6-8a42-4a1bcb9caddd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "b035104e-9a4a-4dc6-8a42-4a1bcb9caddd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "f17c452d-a642-43f1-b232-364bf4f11b76"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f17c452d-a642-43f1-b232-364bf4f11b76",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "4599f84b-0029-44ef-bc8e-68e9c005a23c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4599f84b-0029-44ef-bc8e-68e9c005a23c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "47da5891-d402-48fa-84c9-c075c462f411"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "47da5891-d402-48fa-84c9-c075c462f411",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d7ab3248-07fa-4ba7-97ff-5eec9e5d838d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d7ab3248-07fa-4ba7-97ff-5eec9e5d838d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:23:21.172 13:43:28 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:23:21.172 13:43:28 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:23:21.172 13:43:28 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:23:21.172 13:43:28 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 61322 00:23:21.172 13:43:28 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61322 ']' 00:23:21.172 13:43:28 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61322 00:23:21.172 13:43:28 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:23:21.172 13:43:28 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:21.172 13:43:28 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61322 00:23:21.172 13:43:28 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:21.172 13:43:28 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:21.172 13:43:28 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61322' 00:23:21.172 killing process with pid 61322 00:23:21.172 13:43:28 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61322 00:23:21.172 13:43:28 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61322 00:23:23.707 13:43:31 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:23.707 13:43:31 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:23:23.707 13:43:31 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:23:23.707 13:43:31 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.707 13:43:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:23.707 ************************************ 00:23:23.707 START TEST bdev_hello_world 00:23:23.707 ************************************ 00:23:23.707 13:43:31 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:23:23.967 [2024-11-20 13:43:31.477183] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:23:23.967 [2024-11-20 13:43:31.477308] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61422 ] 00:23:23.967 [2024-11-20 13:43:31.657766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.247 [2024-11-20 13:43:31.788528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:24.815 [2024-11-20 13:43:32.481038] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:23:24.815 [2024-11-20 13:43:32.481097] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:23:24.815 [2024-11-20 13:43:32.481125] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:23:24.815 [2024-11-20 13:43:32.484368] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:23:24.815 [2024-11-20 13:43:32.484812] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:23:24.815 [2024-11-20 13:43:32.484848] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:23:24.815 [2024-11-20 13:43:32.485010] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
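For reference, the hello_bdev example exercised above can be rerun by hand against the same generated configuration; this is just the command from the trace, restated (paths are specific to this run):

  # Open Nvme0n1 through the generated JSON config, write "Hello World!" to it, read it back.
  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -b Nvme0n1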
00:23:24.815 00:23:24.815 [2024-11-20 13:43:32.485037] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:23:26.192 00:23:26.192 real 0m2.348s 00:23:26.192 user 0m2.001s 00:23:26.192 sys 0m0.239s 00:23:26.192 13:43:33 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:26.192 13:43:33 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:23:26.192 ************************************ 00:23:26.192 END TEST bdev_hello_world 00:23:26.192 ************************************ 00:23:26.192 13:43:33 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:23:26.192 13:43:33 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:26.192 13:43:33 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:26.192 13:43:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:26.192 ************************************ 00:23:26.192 START TEST bdev_bounds 00:23:26.192 ************************************ 00:23:26.192 Process bdevio pid: 61465 00:23:26.192 13:43:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:23:26.192 13:43:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61465 00:23:26.192 13:43:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:26.192 13:43:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:23:26.192 13:43:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61465' 00:23:26.192 13:43:33 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61465 00:23:26.192 13:43:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61465 ']' 00:23:26.192 13:43:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:26.192 13:43:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:26.192 13:43:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:26.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:26.192 13:43:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:26.192 13:43:33 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:26.192 [2024-11-20 13:43:33.888824] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:23:26.192 [2024-11-20 13:43:33.889070] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61465 ] 00:23:26.451 [2024-11-20 13:43:34.071565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:26.710 [2024-11-20 13:43:34.211508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:26.710 [2024-11-20 13:43:34.211536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:26.710 [2024-11-20 13:43:34.211543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.279 13:43:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.279 13:43:34 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:23:27.279 13:43:34 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:23:27.538 I/O targets: 00:23:27.538 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:23:27.538 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:23:27.538 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:27.539 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:27.539 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:23:27.539 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:23:27.539 00:23:27.539 00:23:27.539 CUnit - A unit testing framework for C - Version 2.1-3 00:23:27.539 http://cunit.sourceforge.net/ 00:23:27.539 00:23:27.539 00:23:27.539 Suite: bdevio tests on: Nvme3n1 00:23:27.539 Test: blockdev write read block ...passed 00:23:27.539 Test: blockdev write zeroes read block ...passed 00:23:27.539 Test: blockdev write zeroes read no split ...passed 00:23:27.539 Test: blockdev write zeroes read split ...passed 00:23:27.539 Test: blockdev write zeroes read split partial ...passed 00:23:27.539 Test: blockdev reset ...[2024-11-20 13:43:35.145066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:23:27.539 [2024-11-20 13:43:35.149172] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
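The bdev_bounds harness traced above follows a common SPDK pattern: start bdevio in wait mode, wait until its RPC socket answers, then drive the CUnit suites with tests.py. A minimal sketch using the paths from this run; the readiness poll is only an approximation of the waitforlisten helper, not its actual implementation:

  # Launch bdevio and hold it in wait-for-tests mode (-w) on the generated bdev config.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  bdevio_pid=$!
  # Rough stand-in for waitforlisten: poll until the app responds on its RPC socket.
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  # Run every registered bdevio suite (the CUnit output interleaved above and below), then tear down.
  /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests
  kill "$bdevio_pid" && wait "$bdevio_pid"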
00:23:27.539 passed 00:23:27.539 Test: blockdev write read 8 blocks ...passed 00:23:27.539 Test: blockdev write read size > 128k ...passed 00:23:27.539 Test: blockdev write read invalid size ...passed 00:23:27.539 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:27.539 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:27.539 Test: blockdev write read max offset ...passed 00:23:27.539 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:27.539 Test: blockdev writev readv 8 blocks ...passed 00:23:27.539 Test: blockdev writev readv 30 x 1block ...passed 00:23:27.539 Test: blockdev writev readv block ...passed 00:23:27.539 Test: blockdev writev readv size > 128k ...passed 00:23:27.539 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:27.539 Test: blockdev comparev and writev ...[2024-11-20 13:43:35.157130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2afe0a000 len:0x1000 00:23:27.539 [2024-11-20 13:43:35.157278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:23:27.539 passed 00:23:27.539 Test: blockdev nvme passthru rw ...passed 00:23:27.539 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:43:35.158051] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:23:27.539 [2024-11-20 13:43:35.158164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:23:27.539 passed 00:23:27.539 Test: blockdev nvme admin passthru ...passed 00:23:27.539 Test: blockdev copy ...passed 00:23:27.539 Suite: bdevio tests on: Nvme2n3 00:23:27.539 Test: blockdev write read block ...passed 00:23:27.539 Test: blockdev write zeroes read block ...passed 00:23:27.539 Test: blockdev write zeroes read no split ...passed 00:23:27.539 Test: blockdev write zeroes read split ...passed 00:23:27.539 Test: blockdev write zeroes read split partial ...passed 00:23:27.539 Test: blockdev reset ...[2024-11-20 13:43:35.249242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:23:27.539 [2024-11-20 13:43:35.253881] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 00:23:27.539 passed 00:23:27.539 Test: blockdev write read 8 blocks ...passed 00:23:27.539 Test: blockdev write read size > 128k ...passed 00:23:27.799 Test: blockdev write read invalid size ...passed 00:23:27.799 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:27.799 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:27.799 Test: blockdev write read max offset ...passed 00:23:27.799 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:27.799 Test: blockdev writev readv 8 blocks ...passed 00:23:27.799 Test: blockdev writev readv 30 x 1block ...passed 00:23:27.799 Test: blockdev writev readv block ...passed 00:23:27.799 Test: blockdev writev readv size > 128k ...passed 00:23:27.799 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:27.799 Test: blockdev comparev and writev ...[2024-11-20 13:43:35.262376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x292806000 len:0x1000 00:23:27.799 [2024-11-20 13:43:35.262442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:23:27.799 passed 00:23:27.799 Test: blockdev nvme passthru rw ...passed 00:23:27.799 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:43:35.263149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:23:27.799 [2024-11-20 13:43:35.263245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:23:27.799 passed 00:23:27.799 Test: blockdev nvme admin passthru ...passed 00:23:27.799 Test: blockdev copy ...passed 00:23:27.799 Suite: bdevio tests on: Nvme2n2 00:23:27.799 Test: blockdev write read block ...passed 00:23:27.799 Test: blockdev write zeroes read block ...passed 00:23:27.799 Test: blockdev write zeroes read no split ...passed 00:23:27.799 Test: blockdev write zeroes read split ...passed 00:23:27.799 Test: blockdev write zeroes read split partial ...passed 00:23:27.799 Test: blockdev reset ...[2024-11-20 13:43:35.357007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:23:27.799 [2024-11-20 13:43:35.361578] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful.
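Nvme2n1, Nvme2n2 and Nvme2n3 are three namespaces of the single controller at 0000:00:12.0 (serial 12342), which is why the same PCI address keeps appearing in these reset notices. A quick way to confirm that grouping from the RPC side, assuming the target is still up on the default socket:

  # List every bdev backed by the controller at 0000:00:12.0.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | select(.driver_specific.nvme[0].pci_address == "0000:00:12.0") | .name'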
00:23:27.799 passed 00:23:27.799 Test: blockdev write read 8 blocks ...passed 00:23:27.799 Test: blockdev write read size > 128k ...passed 00:23:27.799 Test: blockdev write read invalid size ...passed 00:23:27.799 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:27.799 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:27.799 Test: blockdev write read max offset ...passed 00:23:27.799 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:27.799 Test: blockdev writev readv 8 blocks ...passed 00:23:27.799 Test: blockdev writev readv 30 x 1block ...passed 00:23:27.799 Test: blockdev writev readv block ...passed 00:23:27.799 Test: blockdev writev readv size > 128k ...passed 00:23:27.799 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:27.799 Test: blockdev comparev and writev ...[2024-11-20 13:43:35.369546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bfe3c000 len:0x1000 00:23:27.799 [2024-11-20 13:43:35.369691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:23:27.799 passed 00:23:27.799 Test: blockdev nvme passthru rw ...passed 00:23:27.799 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:43:35.370443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:23:27.799 [2024-11-20 13:43:35.370549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:23:27.799 passed 00:23:27.799 Test: blockdev nvme admin passthru ...passed 00:23:27.799 Test: blockdev copy ...passed 00:23:27.799 Suite: bdevio tests on: Nvme2n1 00:23:27.799 Test: blockdev write read block ...passed 00:23:27.799 Test: blockdev write zeroes read block ...passed 00:23:27.799 Test: blockdev write zeroes read no split ...passed 00:23:27.799 Test: blockdev write zeroes read split ...passed 00:23:27.799 Test: blockdev write zeroes read split partial ...passed 00:23:27.799 Test: blockdev reset ...[2024-11-20 13:43:35.475143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:23:27.799 [2024-11-20 13:43:35.479673] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:23:27.799 passed 00:23:27.799 Test: blockdev write read 8 blocks ...passed 00:23:27.799 Test: blockdev write read size > 128k ...passed 00:23:27.799 Test: blockdev write read invalid size ...passed 00:23:27.799 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:27.799 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:27.799 Test: blockdev write read max offset ...passed 00:23:27.799 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:27.799 Test: blockdev writev readv 8 blocks ...passed 00:23:27.799 Test: blockdev writev readv 30 x 1block ...passed 00:23:27.799 Test: blockdev writev readv block ...passed 00:23:27.799 Test: blockdev writev readv size > 128k ...passed 00:23:27.799 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:27.799 Test: blockdev comparev and writev ...[2024-11-20 13:43:35.488261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bfe38000 len:0x1000 00:23:27.799 [2024-11-20 13:43:35.488402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:23:27.799 passed 00:23:27.799 Test: blockdev nvme passthru rw ...passed 00:23:27.799 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:43:35.489281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:23:27.799 [2024-11-20 13:43:35.489387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:23:27.799 passed 00:23:27.799 Test: blockdev nvme admin passthru ...passed 00:23:27.799 Test: blockdev copy ...passed 00:23:27.799 Suite: bdevio tests on: Nvme1n1 00:23:27.799 Test: blockdev write read block ...passed 00:23:27.799 Test: blockdev write zeroes read block ...passed 00:23:27.799 Test: blockdev write zeroes read no split ...passed 00:23:28.059 Test: blockdev write zeroes read split ...passed 00:23:28.059 Test: blockdev write zeroes read split partial ...passed 00:23:28.059 Test: blockdev reset ...[2024-11-20 13:43:35.573301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:23:28.059 [2024-11-20 13:43:35.577244] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
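Each suite's reset test produces the same disconnect/complete notice pair from the driver, as above for 0000:00:11.0. Outside bdevio, the same reset path can be poked directly over RPC; a hedged one-liner, assuming a tree where the bdev_nvme_reset_controller method is available:

  # Ask the bdev_nvme layer to reset the Nvme1 controller (0000:00:11.0 in this run).
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_reset_controller Nvme1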
00:23:28.059 passed 00:23:28.059 Test: blockdev write read 8 blocks ...passed 00:23:28.059 Test: blockdev write read size > 128k ...passed 00:23:28.059 Test: blockdev write read invalid size ...passed 00:23:28.059 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:28.059 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:28.059 Test: blockdev write read max offset ...passed 00:23:28.059 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:28.059 Test: blockdev writev readv 8 blocks ...passed 00:23:28.059 Test: blockdev writev readv 30 x 1block ...passed 00:23:28.059 Test: blockdev writev readv block ...passed 00:23:28.059 Test: blockdev writev readv size > 128k ...passed 00:23:28.059 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:28.059 Test: blockdev comparev and writev ...[2024-11-20 13:43:35.585562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bfe34000 len:0x1000 00:23:28.059 [2024-11-20 13:43:35.585700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:23:28.059 passed 00:23:28.059 Test: blockdev nvme passthru rw ...passed 00:23:28.059 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:43:35.586319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:23:28.059 [2024-11-20 13:43:35.586363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:23:28.059 passed 00:23:28.059 Test: blockdev nvme admin passthru ...passed 00:23:28.059 Test: blockdev copy ...passed 00:23:28.059 Suite: bdevio tests on: Nvme0n1 00:23:28.059 Test: blockdev write read block ...passed 00:23:28.059 Test: blockdev write zeroes read block ...passed 00:23:28.059 Test: blockdev write zeroes read no split ...passed 00:23:28.059 Test: blockdev write zeroes read split ...passed 00:23:28.059 Test: blockdev write zeroes read split partial ...passed 00:23:28.059 Test: blockdev reset ...[2024-11-20 13:43:35.672192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:23:28.059 [2024-11-20 13:43:35.675745] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:23:28.059 passed 00:23:28.059 Test: blockdev write read 8 blocks ...passed 00:23:28.059 Test: blockdev write read size > 128k ...passed 00:23:28.059 Test: blockdev write read invalid size ...passed 00:23:28.059 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:28.059 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:28.059 Test: blockdev write read max offset ...passed 00:23:28.059 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:28.059 Test: blockdev writev readv 8 blocks ...passed 00:23:28.059 Test: blockdev writev readv 30 x 1block ...passed 00:23:28.059 Test: blockdev writev readv block ...passed 00:23:28.059 Test: blockdev writev readv size > 128k ...passed 00:23:28.059 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:28.059 Test: blockdev comparev and writev ...[2024-11-20 13:43:35.683111] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:23:28.059 separate metadata which is not supported yet.
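The skip above is expected: the earlier bdev_get_bdevs dump shows Nvme0n1 with "md_size": 64 and "md_interleave": false, i.e. separate metadata, which blockdev_comparev_and_writev does not handle yet. The relevant fields can be pulled out directly; a sketch against the default RPC socket:

  # Show the metadata layout that triggers the comparev_and_writev skip on Nvme0n1.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
      | jq '.[0] | {md_size, md_interleave, dif_type}'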
00:23:28.059 passed 00:23:28.059 Test: blockdev nvme passthru rw ...passed 00:23:28.059 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:43:35.683738] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:23:28.059 [2024-11-20 13:43:35.683861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:23:28.059 passed 00:23:28.059 Test: blockdev nvme admin passthru ...passed 00:23:28.059 Test: blockdev copy ...passed 00:23:28.059 00:23:28.059 Run Summary: Type Total Ran Passed Failed Inactive 00:23:28.059 suites 6 6 n/a 0 0 00:23:28.059 tests 138 138 138 0 0 00:23:28.059 asserts 893 893 893 0 n/a 00:23:28.059 00:23:28.059 Elapsed time = 1.717 seconds 00:23:28.059 0 00:23:28.059 13:43:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61465 00:23:28.059 13:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61465 ']' 00:23:28.059 13:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61465 00:23:28.059 13:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:23:28.059 13:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:28.059 13:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61465 00:23:28.059 13:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:28.059 13:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:28.059 13:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61465' 00:23:28.059 killing process with pid 61465 00:23:28.059 13:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61465 00:23:28.059 13:43:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61465 00:23:29.440 13:43:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:23:29.440 00:23:29.440 real 0m3.121s 00:23:29.440 user 0m8.082s 00:23:29.440 sys 0m0.420s 00:23:29.440 13:43:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:29.440 13:43:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:23:29.440 ************************************ 00:23:29.440 END TEST bdev_bounds 00:23:29.440 ************************************ 00:23:29.440 13:43:36 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:23:29.440 13:43:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:23:29.440 13:43:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:29.440 13:43:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:29.440 ************************************ 00:23:29.440 START TEST bdev_nbd 00:23:29.440 ************************************ 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:23:29.440 13:43:36 
blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61530 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61530 /var/tmp/spdk-nbd.sock 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61530 ']' 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:23:29.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:29.440 13:43:36 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:29.440 [2024-11-20 13:43:37.083180] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
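The bdev_nbd test starting above maps each bdev to a kernel /dev/nbdX node through the bdev_svc app's RPC socket, waits for the kernel to expose it, does a direct-I/O read, and unmaps it. A condensed sketch of that per-device cycle, with the socket and paths from this run:

  # Export Nvme0n1 as /dev/nbd0 via the spdk-nbd RPC socket.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  # Wait until the kernel lists the device (the waitfornbd idiom in the trace).
  until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
  # One direct 4 KiB read to prove the mapping works end to end.
  dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
  # Unmap the device again.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0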
00:23:29.440 [2024-11-20 13:43:37.083370] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.700 [2024-11-20 13:43:37.244695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.700 [2024-11-20 13:43:37.371225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:30.635 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.635 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:23:30.635 13:43:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:23:30.635 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:30.635 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:23:30.635 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:23:30.635 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:23:30.635 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:30.635 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:23:30.635 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:23:30.635 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:23:30.635 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:23:30.635 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:23:30.635 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:30.635 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:23:30.893 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:23:30.893 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:23:30.893 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:23:30.893 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:30.893 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:30.893 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:30.893 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:30.893 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:30.893 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:30.893 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:30.893 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:30.893 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:30.893 1+0 records in 
00:23:30.893 1+0 records out 00:23:30.893 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000692443 s, 5.9 MB/s 00:23:30.893 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:30.893 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:30.893 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:30.893 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:30.894 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:30.894 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:30.894 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:30.894 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:31.152 1+0 records in 00:23:31.152 1+0 records out 00:23:31.152 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000674287 s, 6.1 MB/s 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:31.152 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:31.411 1+0 records in 00:23:31.411 1+0 records out 00:23:31.411 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000693337 s, 5.9 MB/s 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:31.411 13:43:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:31.677 1+0 records in 00:23:31.677 1+0 records out 00:23:31.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000972828 s, 4.2 MB/s 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:31.677 13:43:39 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:31.677 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:23:31.945 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:23:31.945 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:23:31.945 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:23:31.945 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:23:31.945 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:31.945 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:31.945 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:31.945 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:23:31.945 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:31.945 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:31.945 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:31.945 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:31.945 1+0 records in 00:23:31.945 1+0 records out 00:23:31.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000796786 s, 5.1 MB/s 00:23:31.946 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:31.946 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:31.946 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:31.946 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:31.946 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:31.946 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:31.946 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:31.946 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:32.205 1+0 records in 00:23:32.205 1+0 records out 00:23:32.205 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000691646 s, 5.9 MB/s 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:23:32.205 13:43:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:32.464 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:23:32.464 { 00:23:32.464 "nbd_device": "/dev/nbd0", 00:23:32.464 "bdev_name": "Nvme0n1" 00:23:32.464 }, 00:23:32.464 { 00:23:32.464 "nbd_device": "/dev/nbd1", 00:23:32.464 "bdev_name": "Nvme1n1" 00:23:32.464 }, 00:23:32.464 { 00:23:32.464 "nbd_device": "/dev/nbd2", 00:23:32.464 "bdev_name": "Nvme2n1" 00:23:32.464 }, 00:23:32.464 { 00:23:32.464 "nbd_device": "/dev/nbd3", 00:23:32.464 "bdev_name": "Nvme2n2" 00:23:32.464 }, 00:23:32.464 { 00:23:32.464 "nbd_device": "/dev/nbd4", 00:23:32.464 "bdev_name": "Nvme2n3" 00:23:32.464 }, 00:23:32.464 { 00:23:32.464 "nbd_device": "/dev/nbd5", 00:23:32.464 "bdev_name": "Nvme3n1" 00:23:32.464 } 00:23:32.464 ]' 00:23:32.464 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:23:32.464 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:23:32.464 { 00:23:32.464 "nbd_device": "/dev/nbd0", 00:23:32.464 "bdev_name": "Nvme0n1" 00:23:32.464 }, 00:23:32.464 { 00:23:32.464 "nbd_device": "/dev/nbd1", 00:23:32.464 "bdev_name": "Nvme1n1" 00:23:32.464 }, 00:23:32.464 { 00:23:32.464 "nbd_device": "/dev/nbd2", 00:23:32.464 "bdev_name": "Nvme2n1" 00:23:32.464 }, 00:23:32.464 { 00:23:32.464 "nbd_device": "/dev/nbd3", 00:23:32.464 "bdev_name": "Nvme2n2" 00:23:32.464 }, 00:23:32.464 { 00:23:32.464 "nbd_device": "/dev/nbd4", 00:23:32.464 "bdev_name": "Nvme2n3" 00:23:32.464 }, 00:23:32.464 { 00:23:32.464 "nbd_device": "/dev/nbd5", 00:23:32.464 "bdev_name": "Nvme3n1" 00:23:32.464 } 00:23:32.464 ]' 00:23:32.464 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:32.724 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:32.982 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:33.242 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:33.242 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:33.242 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:33.242 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:33.242 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:33.242 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:33.242 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:33.242 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:33.242 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:23:33.243 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:23:33.503 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:23:33.503 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:23:33.503 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:33.503 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:33.503 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:23:33.503 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:33.503 13:43:40 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:23:33.503 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:33.503 13:43:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:23:33.762 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:23:33.762 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:23:33.762 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:23:33.762 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:33.762 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:33.762 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:23:33.762 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:33.762 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:33.762 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:33.762 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:23:33.762 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:23:33.762 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:23:33.762 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:23:33.762 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:33.762 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:33.762 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:23:34.022 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:34.022 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:34.022 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:34.022 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:23:34.022 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:23:34.022 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:23:34.022 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:23:34.022 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:34.022 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:34.022 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:23:34.022 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:34.022 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:34.022 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:34.022 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:34.022 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:34.282 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:34.282 13:43:41 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:34.282 13:43:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:34.542 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:23:34.542 /dev/nbd0 00:23:34.801 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:23:34.801 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:23:34.801 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:23:34.801 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:34.801 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:34.801 
13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:34.801 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:23:34.801 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:34.801 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:34.802 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:34.802 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:34.802 1+0 records in 00:23:34.802 1+0 records out 00:23:34.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000747008 s, 5.5 MB/s 00:23:34.802 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:34.802 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:34.802 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:34.802 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:34.802 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:34.802 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:34.802 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:34.802 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:23:35.060 /dev/nbd1 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:35.060 1+0 records in 00:23:35.060 1+0 records out 00:23:35.060 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000528733 s, 7.7 MB/s 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:35.060 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:23:35.319 /dev/nbd10 00:23:35.319 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:23:35.319 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:23:35.319 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:23:35.320 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:35.320 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:35.320 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:35.320 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:23:35.320 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:35.320 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:35.320 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:35.320 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:35.320 1+0 records in 00:23:35.320 1+0 records out 00:23:35.320 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000783009 s, 5.2 MB/s 00:23:35.320 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.320 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:35.320 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.320 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:35.320 13:43:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:35.320 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:35.320 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:35.320 13:43:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:23:35.578 /dev/nbd11 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:35.578 13:43:43 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:35.578 1+0 records in 00:23:35.578 1+0 records out 00:23:35.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000717064 s, 5.7 MB/s 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:35.578 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:23:35.835 /dev/nbd12 00:23:35.835 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:23:35.835 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:23:35.835 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:23:35.835 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:35.835 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:35.835 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:35.836 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:23:35.836 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:35.836 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:35.836 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:35.836 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:35.836 1+0 records in 00:23:35.836 1+0 records out 00:23:35.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00067893 s, 6.0 MB/s 00:23:35.836 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.836 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:35.836 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:35.836 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:35.836 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:35.836 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:35.836 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:35.836 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:23:36.094 /dev/nbd13 
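The readiness check above repeats verbatim for each attached device (nbd0, nbd1, nbd10, nbd11, nbd12, and nbd13 next). Pieced together from the traced lines of common/autotest_common.sh, the waitfornbd helper amounts to the sketch below; the body is reconstructed from the xtrace rather than copied from the source, and the sleep between attempts is an assumption (it does not show up in the trace):

    waitfornbd() {
        local nbd_name=$1 i size
        local scratch=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
        # Poll until the kernel lists the device (the trace shows up to 20 tries).
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1  # assumed back-off; not visible in the xtrace
        done
        # Prove the device is actually readable: copy one 4 KiB block with
        # O_DIRECT and check that a full block arrived.
        for ((i = 1; i <= 20; i++)); do
            if dd if=/dev/$nbd_name of=$scratch bs=4096 count=1 iflag=direct; then
                size=$(stat -c %s $scratch)
                rm -f $scratch
                [ "$size" != 0 ] && return 0
            fi
        done
        return 1
    }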
00:23:36.094 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:23:36.094 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:23:36.094 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:23:36.094 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:23:36.095 1+0 records in 00:23:36.095 1+0 records out 00:23:36.095 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000793548 s, 5.2 MB/s 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:36.095 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:36.353 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:23:36.353 { 00:23:36.353 "nbd_device": "/dev/nbd0", 00:23:36.353 "bdev_name": "Nvme0n1" 00:23:36.353 }, 00:23:36.353 { 00:23:36.353 "nbd_device": "/dev/nbd1", 00:23:36.353 "bdev_name": "Nvme1n1" 00:23:36.353 }, 00:23:36.353 { 00:23:36.353 "nbd_device": "/dev/nbd10", 00:23:36.353 "bdev_name": "Nvme2n1" 00:23:36.353 }, 00:23:36.353 { 00:23:36.353 "nbd_device": "/dev/nbd11", 00:23:36.353 "bdev_name": "Nvme2n2" 00:23:36.353 }, 00:23:36.353 { 00:23:36.353 "nbd_device": "/dev/nbd12", 00:23:36.353 "bdev_name": "Nvme2n3" 00:23:36.353 }, 00:23:36.353 { 00:23:36.353 "nbd_device": "/dev/nbd13", 00:23:36.353 "bdev_name": "Nvme3n1" 00:23:36.353 } 00:23:36.353 ]' 00:23:36.353 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:23:36.353 { 00:23:36.353 "nbd_device": "/dev/nbd0", 00:23:36.353 "bdev_name": "Nvme0n1" 00:23:36.353 }, 00:23:36.353 { 00:23:36.353 "nbd_device": "/dev/nbd1", 00:23:36.353 "bdev_name": "Nvme1n1" 00:23:36.353 }, 00:23:36.353 { 00:23:36.353 "nbd_device": "/dev/nbd10", 00:23:36.353 "bdev_name": "Nvme2n1" 
00:23:36.353 }, 00:23:36.353 { 00:23:36.353 "nbd_device": "/dev/nbd11", 00:23:36.353 "bdev_name": "Nvme2n2" 00:23:36.353 }, 00:23:36.353 { 00:23:36.353 "nbd_device": "/dev/nbd12", 00:23:36.353 "bdev_name": "Nvme2n3" 00:23:36.353 }, 00:23:36.353 { 00:23:36.353 "nbd_device": "/dev/nbd13", 00:23:36.353 "bdev_name": "Nvme3n1" 00:23:36.353 } 00:23:36.353 ]' 00:23:36.353 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:36.353 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:23:36.353 /dev/nbd1 00:23:36.353 /dev/nbd10 00:23:36.353 /dev/nbd11 00:23:36.353 /dev/nbd12 00:23:36.353 /dev/nbd13' 00:23:36.353 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:23:36.353 /dev/nbd1 00:23:36.353 /dev/nbd10 00:23:36.353 /dev/nbd11 00:23:36.353 /dev/nbd12 00:23:36.353 /dev/nbd13' 00:23:36.354 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:36.354 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:23:36.354 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:23:36.354 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:23:36.354 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:23:36.354 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:23:36.354 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:36.354 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:36.354 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:23:36.354 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:36.354 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:23:36.354 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:23:36.354 256+0 records in 00:23:36.354 256+0 records out 00:23:36.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00611102 s, 172 MB/s 00:23:36.354 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:36.354 13:43:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:23:36.354 256+0 records in 00:23:36.354 256+0 records out 00:23:36.354 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.09519 s, 11.0 MB/s 00:23:36.354 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:36.354 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:23:36.613 256+0 records in 00:23:36.613 256+0 records out 00:23:36.613 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0923675 s, 11.4 MB/s 00:23:36.613 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:36.613 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:23:36.613 256+0 records in 00:23:36.613 256+0 records out 00:23:36.613 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0927779 s, 11.3 MB/s 00:23:36.613 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:36.613 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:23:36.872 256+0 records in 00:23:36.872 256+0 records out 00:23:36.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0956944 s, 11.0 MB/s 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:23:36.872 256+0 records in 00:23:36.872 256+0 records out 00:23:36.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0922278 s, 11.4 MB/s 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:23:36.872 256+0 records in 00:23:36.872 256+0 records out 00:23:36.872 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0911229 s, 11.5 MB/s 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:36.872 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:23:36.873 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:36.873 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:23:37.134 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:37.134 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:23:37.134 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:23:37.134 13:43:44 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:23:37.134 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:23:37.134 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:23:37.134 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:37.134 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:23:37.134 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:37.134 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:37.134 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:37.134 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:37.134 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:37.395 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:37.395 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:37.395 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:37.395 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:37.395 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:37.395 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:37.395 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:37.395 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:37.395 13:43:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:23:37.395 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:23:37.395 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:23:37.395 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:23:37.395 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:37.395 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:37.395 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:23:37.395 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:37.395 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:37.395 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:37.395 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:23:37.654 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:23:37.654 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:23:37.654 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:23:37.654 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:37.654 
13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:37.654 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:23:37.654 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:37.654 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:37.654 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:37.654 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:23:37.912 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:23:37.912 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:23:37.912 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:23:37.912 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:37.912 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:37.912 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:23:37.912 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:37.912 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:37.912 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:37.912 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:23:38.172 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:23:38.172 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:23:38.172 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:23:38.172 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:38.172 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:38.172 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:23:38.172 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:38.172 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:38.172 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:38.172 13:43:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:23:38.430 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:23:38.430 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:23:38.430 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:23:38.430 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:38.430 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:38.430 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:23:38.430 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:38.430 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:38.430 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:23:38.430 13:43:46 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:38.430 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:23:38.690 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:23:38.691 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:23:38.691 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:23:38.691 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:23:38.691 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:23:38.691 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:23:38.691 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:23:38.691 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:23:38.691 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:23:38.691 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:23:38.691 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:23:38.691 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:23:38.691 13:43:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:23:38.691 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:38.691 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:23:38.691 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:23:38.950 malloc_lvol_verify 00:23:38.950 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:23:39.209 9a210d2c-c5f0-442a-8f2e-a05a5c613590 00:23:39.209 13:43:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:23:39.467 f9f31cca-8b40-4d58-ad36-440b5af3e6aa 00:23:39.467 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:23:39.726 /dev/nbd0 00:23:39.726 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:23:39.726 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:23:39.726 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:23:39.726 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:23:39.726 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:23:39.726 mke2fs 1.47.0 (5-Feb-2023) 00:23:39.726 Discarding device blocks: 0/4096 done 00:23:39.726 Creating filesystem with 4096 1k blocks and 1024 inodes 00:23:39.726 00:23:39.726 Allocating group tables: 0/1 done 00:23:39.726 Writing inode tables: 0/1 done 00:23:39.726 Creating journal (1024 blocks): done 00:23:39.726 Writing superblocks and filesystem accounting information: 0/1 done 00:23:39.726 00:23:39.726 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 
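The mkfs run above is the payoff of nbd_with_lvol_verify; the sequence that produced it reduces to the rpc.py calls below (condensed from the traced commands; the $RPC shorthand is illustrative, and the capacity check against /sys/block/nbd0/size is omitted):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB malloc bdev, 512 B blocks
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of it
    $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MB logical volume
    $RPC nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0   # clean format (4096 1k blocks above) proves the volume is usable
    $RPC nbd_stop_disk /dev/nbd0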
00:23:39.726 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:23:39.726 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:23:39.726 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:23:39.726 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:23:39.726 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:23:39.726 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61530 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61530 ']' 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61530 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61530 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.985 killing process with pid 61530 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61530' 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61530 00:23:39.985 13:43:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61530 00:23:41.893 13:43:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:23:41.893 00:23:41.893 real 0m12.124s 00:23:41.893 user 0m16.605s 00:23:41.893 sys 0m4.347s 00:23:41.893 13:43:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:41.893 13:43:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:23:41.893 ************************************ 00:23:41.893 END TEST bdev_nbd 00:23:41.893 ************************************ 00:23:41.893 13:43:49 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:23:41.893 13:43:49 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:23:41.893 skipping fio tests on NVMe due to multi-ns failures. 00:23:41.893 13:43:49 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
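That wraps up TEST bdev_nbd (about 12 s of wall time, per the real/user/sys figures above). Its core data-path check, run earlier against all six exported devices, is worth restating in compact form: write random data to every device, then read each back and compare. Paths, sizes, and flags below are the ones visible in the trace; only the loop framing is paraphrased:

    randfile=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
    dd if=/dev/urandom of=$randfile bs=4096 count=256        # stage 1 MiB of random data
    for i in "${nbd_list[@]}"; do
        dd if=$randfile of=$i bs=4096 count=256 oflag=direct # write it to each device
    done
    for i in "${nbd_list[@]}"; do
        cmp -b -n 1M $randfile $i                            # byte-for-byte read-back check
    done
    rm $randfile

After teardown, nbd_get_disks is queried once more and piped through jq -r '.[] | .nbd_device' and grep -c /dev/nbd, expecting a count of 0 -- the count=0 entries above.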
00:23:41.893 13:43:49 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:41.893 13:43:49 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:41.893 13:43:49 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:41.893 13:43:49 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:41.893 13:43:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:41.893 ************************************ 00:23:41.893 START TEST bdev_verify 00:23:41.893 ************************************ 00:23:41.893 13:43:49 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:23:41.893 [2024-11-20 13:43:49.267576] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:23:41.893 [2024-11-20 13:43:49.267757] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61930 ] 00:23:41.893 [2024-11-20 13:43:49.452892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:41.893 [2024-11-20 13:43:49.590454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.893 [2024-11-20 13:43:49.590491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:42.832 Running I/O for 5 seconds... 00:23:45.158 19136.00 IOPS, 74.75 MiB/s [2024-11-20T13:43:53.829Z] 20096.00 IOPS, 78.50 MiB/s [2024-11-20T13:43:54.764Z] 19925.33 IOPS, 77.83 MiB/s [2024-11-20T13:43:55.700Z] 19264.00 IOPS, 75.25 MiB/s [2024-11-20T13:43:55.700Z] 19046.40 IOPS, 74.40 MiB/s 00:23:47.981 Latency(us) 00:23:47.981 [2024-11-20T13:43:55.700Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.981 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:47.981 Verification LBA range: start 0x0 length 0xbd0bd 00:23:47.981 Nvme0n1 : 5.10 1556.23 6.08 0.00 0.00 82044.73 17171.00 92494.48 00:23:47.981 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:47.981 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:23:47.981 Nvme0n1 : 5.08 1574.87 6.15 0.00 0.00 80766.70 13221.67 92494.48 00:23:47.981 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:47.981 Verification LBA range: start 0x0 length 0xa0000 00:23:47.981 Nvme1n1 : 5.10 1555.74 6.08 0.00 0.00 81871.40 15224.96 87915.54 00:23:47.981 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:47.981 Verification LBA range: start 0xa0000 length 0xa0000 00:23:47.981 Nvme1n1 : 5.10 1581.95 6.18 0.00 0.00 80522.41 14080.22 81505.03 00:23:47.981 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:47.981 Verification LBA range: start 0x0 length 0x80000 00:23:47.981 Nvme2n1 : 5.10 1554.98 6.07 0.00 0.00 81716.33 15224.96 81047.14 00:23:47.981 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:47.981 Verification LBA range: start 0x80000 length 0x80000 00:23:47.981 Nvme2n1 : 5.10 1580.94 6.18 0.00 0.00 80368.15 15110.48 71431.38 00:23:47.981 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:47.981 Verification LBA range: start 0x0 length 0x80000 00:23:47.981 Nvme2n2 : 5.11 1553.74 6.07 0.00 0.00 81565.03 17628.90 73720.85 00:23:47.981 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:47.981 Verification LBA range: start 0x80000 length 0x80000 00:23:47.981 Nvme2n2 : 5.10 1580.16 6.17 0.00 0.00 80218.47 16140.74 68684.02 00:23:47.981 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:47.981 Verification LBA range: start 0x0 length 0x80000 00:23:47.981 Nvme2n3 : 5.11 1553.11 6.07 0.00 0.00 81385.34 18315.74 72805.06 00:23:47.981 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:47.981 Verification LBA range: start 0x80000 length 0x80000 00:23:47.981 Nvme2n3 : 5.11 1579.43 6.17 0.00 0.00 80046.94 16598.64 70973.48 00:23:47.981 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:47.981 Verification LBA range: start 0x0 length 0x20000 00:23:47.981 Nvme3n1 : 5.11 1552.66 6.07 0.00 0.00 81214.67 17285.48 74636.63 00:23:47.981 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:47.981 Verification LBA range: start 0x20000 length 0x20000 00:23:47.981 Nvme3n1 : 5.11 1578.96 6.17 0.00 0.00 79881.73 16255.22 74178.74 00:23:47.981 [2024-11-20T13:43:55.700Z] =================================================================================================================== 00:23:47.981 [2024-11-20T13:43:55.700Z] Total : 18802.78 73.45 0.00 0.00 80961.63 13221.67 92494.48 00:23:49.888 00:23:49.888 real 0m8.147s 00:23:49.888 user 0m15.028s 00:23:49.888 sys 0m0.329s 00:23:49.888 13:43:57 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:49.888 13:43:57 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:23:49.888 ************************************ 00:23:49.888 END TEST bdev_verify 00:23:49.888 ************************************ 00:23:49.888 13:43:57 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:49.888 13:43:57 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:23:49.888 13:43:57 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:49.888 13:43:57 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:49.888 ************************************ 00:23:49.888 START TEST bdev_verify_big_io 00:23:49.888 ************************************ 00:23:49.888 13:43:57 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:23:49.888 [2024-11-20 13:43:57.470054] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
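Two quick consistency checks on the bdev_verify table above. Throughput: the Total row's 18802.78 IOPS at the 4096-byte I/O size requested with -o 4096 works out to 18802.78 x 4096 / 2^20 = 73.45 MiB/s, exactly the MiB/s column. Latency: with 6 bdevs x 2 reactors = 12 jobs at queue depth 128 (-q 128), roughly 1536 I/Os are in flight, and Little's law gives 1536 / 18802.78 = 81.7 ms, in line with the 80961.63 us average. The -m 0x3 core mask is also why each bdev appears twice in the table, once per Core Mask (0x1 and 0x2).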
00:23:49.888 [2024-11-20 13:43:57.470191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62034 ] 00:23:50.145 [2024-11-20 13:43:57.654377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:50.145 [2024-11-20 13:43:57.789549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.145 [2024-11-20 13:43:57.789585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:51.082 Running I/O for 5 seconds... 00:23:56.287 1419.00 IOPS, 88.69 MiB/s [2024-11-20T13:44:04.574Z] 2613.00 IOPS, 163.31 MiB/s [2024-11-20T13:44:04.834Z] 2705.67 IOPS, 169.10 MiB/s 00:23:57.115 Latency(us) 00:23:57.115 [2024-11-20T13:44:04.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:57.115 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:57.115 Verification LBA range: start 0x0 length 0xbd0b 00:23:57.115 Nvme0n1 : 5.69 131.46 8.22 0.00 0.00 927802.01 49910.39 945092.08 00:23:57.115 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:57.115 Verification LBA range: start 0xbd0b length 0xbd0b 00:23:57.115 Nvme0n1 : 5.68 130.19 8.14 0.00 0.00 927686.95 25069.67 959744.67 00:23:57.115 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:57.115 Verification LBA range: start 0x0 length 0xa000 00:23:57.115 Nvme1n1 : 5.69 134.88 8.43 0.00 0.00 892043.22 119052.30 853513.39 00:23:57.115 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:57.115 Verification LBA range: start 0xa000 length 0xa000 00:23:57.115 Nvme1n1 : 5.69 135.05 8.44 0.00 0.00 886757.77 105315.49 875492.28 00:23:57.115 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:57.115 Verification LBA range: start 0x0 length 0x8000 00:23:57.115 Nvme2n1 : 5.70 134.82 8.43 0.00 0.00 866790.51 130957.53 857176.54 00:23:57.115 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:57.115 Verification LBA range: start 0x8000 length 0x8000 00:23:57.115 Nvme2n1 : 5.74 137.27 8.58 0.00 0.00 846441.61 50826.17 879155.42 00:23:57.115 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:57.115 Verification LBA range: start 0x0 length 0x8000 00:23:57.115 Nvme2n2 : 5.82 142.83 8.93 0.00 0.00 804420.85 32968.33 915786.90 00:23:57.115 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:57.115 Verification LBA range: start 0x8000 length 0x8000 00:23:57.115 Nvme2n2 : 5.82 136.50 8.53 0.00 0.00 830364.14 42355.14 1736331.96 00:23:57.115 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:57.115 Verification LBA range: start 0x0 length 0x8000 00:23:57.115 Nvme2n3 : 5.84 149.60 9.35 0.00 0.00 751640.69 10302.60 937765.79 00:23:57.115 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:23:57.115 Verification LBA range: start 0x8000 length 0x8000 00:23:57.115 Nvme2n3 : 5.85 139.77 8.74 0.00 0.00 786998.20 31823.59 1765637.14 00:23:57.115 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:23:57.115 Verification LBA range: start 0x0 length 0x2000 00:23:57.115 Nvme3n1 : 5.84 153.83 9.61 0.00 0.00 710168.22 3462.82 1062312.80 00:23:57.115 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, 
IO size: 65536) 00:23:57.115 Verification LBA range: start 0x2000 length 0x2000 00:23:57.115 Nvme3n1 : 5.89 170.87 10.68 0.00 0.00 631435.66 1144.73 1802268.62 00:23:57.115 [2024-11-20T13:44:04.834Z] =================================================================================================================== 00:23:57.115 [2024-11-20T13:44:04.834Z] Total : 1697.06 106.07 0.00 0.00 814258.78 1144.73 1802268.62 00:23:59.648 00:23:59.648 real 0m9.556s 00:23:59.648 user 0m17.798s 00:23:59.648 sys 0m0.355s 00:23:59.648 13:44:06 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:59.648 13:44:06 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:23:59.648 ************************************ 00:23:59.648 END TEST bdev_verify_big_io 00:23:59.648 ************************************ 00:23:59.648 13:44:06 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:59.648 13:44:06 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:23:59.648 13:44:06 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:59.648 13:44:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:23:59.648 ************************************ 00:23:59.648 START TEST bdev_write_zeroes 00:23:59.648 ************************************ 00:23:59.648 13:44:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:23:59.648 [2024-11-20 13:44:07.069789] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:23:59.648 [2024-11-20 13:44:07.069931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62154 ] 00:23:59.648 [2024-11-20 13:44:07.253801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.906 [2024-11-20 13:44:07.414675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.474 Running I/O for 1 seconds... 
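The bdev_verify_big_io totals pass the same throughput check at the larger I/O size: 1697.06 IOPS x 65536 bytes (-o 65536) / 2^20 = 106.07 MiB/s, matching the Total row, and the roughly 0.8 s average latency is close to the ~0.9 s (1536 in-flight I/Os / 1697.06 IOPS) that Little's law predicts at queue depth 128 with 64 KiB blocks.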
00:24:01.849 58368.00 IOPS, 228.00 MiB/s 00:24:01.849 Latency(us) 00:24:01.849 [2024-11-20T13:44:09.568Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.849 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:01.849 Nvme0n1 : 1.02 9701.02 37.89 0.00 0.00 13148.35 10817.73 28847.29 00:24:01.849 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:01.849 Nvme1n1 : 1.02 9686.81 37.84 0.00 0.00 13148.64 10817.73 29534.13 00:24:01.849 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:01.849 Nvme2n1 : 1.03 9673.65 37.79 0.00 0.00 13111.63 10703.26 27130.19 00:24:01.849 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:01.849 Nvme2n2 : 1.03 9706.61 37.92 0.00 0.00 13019.92 8814.45 22207.83 00:24:01.849 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:01.849 Nvme2n3 : 1.03 9693.71 37.87 0.00 0.00 12998.13 9043.40 22551.25 00:24:01.849 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:01.849 Nvme3n1 : 1.03 9679.16 37.81 0.00 0.00 12980.29 9157.87 24039.41 00:24:01.849 [2024-11-20T13:44:09.568Z] =================================================================================================================== 00:24:01.849 [2024-11-20T13:44:09.568Z] Total : 58140.96 227.11 0.00 0.00 13067.61 8814.45 29534.13 00:24:02.803 00:24:02.803 real 0m3.540s 00:24:02.803 user 0m3.155s 00:24:02.803 sys 0m0.264s 00:24:02.803 13:44:10 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:02.803 13:44:10 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:24:02.803 ************************************ 00:24:02.803 END TEST bdev_write_zeroes 00:24:02.803 ************************************ 00:24:03.062 13:44:10 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:03.062 13:44:10 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:03.062 13:44:10 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:03.062 13:44:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:24:03.062 ************************************ 00:24:03.062 START TEST bdev_json_nonenclosed 00:24:03.062 ************************************ 00:24:03.062 13:44:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:03.062 [2024-11-20 13:44:10.667042] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
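For bdev_write_zeroes the run is shorter (-t 1) and single-reactor -- note "Total cores available: 1" and every job on Core Mask 0x1 -- but the arithmetic in its table still closes: 58140.96 IOPS x 4096 / 2^20 = 227.11 MiB/s, as reported in the Total row.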
00:24:03.062 [2024-11-20 13:44:10.667539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62218 ] 00:24:03.321 [2024-11-20 13:44:10.850220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:03.321 [2024-11-20 13:44:10.989255] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.321 [2024-11-20 13:44:10.989387] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:24:03.321 [2024-11-20 13:44:10.989408] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:24:03.321 [2024-11-20 13:44:10.989420] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:03.580 00:24:03.580 real 0m0.698s 00:24:03.580 user 0m0.464s 00:24:03.580 sys 0m0.128s 00:24:03.580 13:44:11 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:03.580 13:44:11 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:24:03.580 ************************************ 00:24:03.580 END TEST bdev_json_nonenclosed 00:24:03.580 ************************************ 00:24:03.840 13:44:11 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:03.840 13:44:11 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:03.840 13:44:11 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:03.840 13:44:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:24:03.840 ************************************ 00:24:03.840 START TEST bdev_json_nonarray 00:24:03.840 ************************************ 00:24:03.840 13:44:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:03.840 [2024-11-20 13:44:11.451409] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:24:03.840 [2024-11-20 13:44:11.451534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62238 ] 00:24:04.098 [2024-11-20 13:44:11.630999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.098 [2024-11-20 13:44:11.761408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:04.098 [2024-11-20 13:44:11.761539] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
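The two JSON negative tests feed deliberately malformed configs to bdevperf and expect the app to stop with a non-zero status. The config files themselves are never echoed into the log, but the two error strings pin down their shape; the contents below are assumed reconstructions, not the actual test files:

    nonenclosed.json -- top-level members without the enclosing {} (assumed):

        "subsystems": []

    nonarray.json -- "subsystems" present but an object rather than an array (assumed):

        { "subsystems": {} }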
00:24:04.098 [2024-11-20 13:44:11.761561] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:24:04.098 [2024-11-20 13:44:11.761572] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:04.358 00:24:04.358 real 0m0.702s 00:24:04.358 user 0m0.464s 00:24:04.358 sys 0m0.133s 00:24:04.358 13:44:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.358 13:44:12 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:24:04.358 ************************************ 00:24:04.358 END TEST bdev_json_nonarray 00:24:04.358 ************************************ 00:24:04.617 13:44:12 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:24:04.617 13:44:12 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:24:04.617 13:44:12 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:24:04.617 13:44:12 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:24:04.617 13:44:12 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:24:04.617 13:44:12 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:24:04.617 13:44:12 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:04.617 13:44:12 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:24:04.617 13:44:12 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:24:04.617 13:44:12 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:24:04.617 13:44:12 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:24:04.617 00:24:04.617 real 0m45.669s 00:24:04.617 user 1m8.689s 00:24:04.617 sys 0m7.305s 00:24:04.617 13:44:12 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:04.617 13:44:12 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:24:04.617 ************************************ 00:24:04.617 END TEST blockdev_nvme 00:24:04.617 ************************************ 00:24:04.617 13:44:12 -- spdk/autotest.sh@209 -- # uname -s 00:24:04.617 13:44:12 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:24:04.617 13:44:12 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:24:04.617 13:44:12 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:04.617 13:44:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:04.617 13:44:12 -- common/autotest_common.sh@10 -- # set +x 00:24:04.617 ************************************ 00:24:04.617 START TEST blockdev_nvme_gpt 00:24:04.617 ************************************ 00:24:04.617 13:44:12 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:24:04.617 * Looking for test storage... 
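
The START TEST/END TEST banners and real/user/sys triplets throughout this log come from the run_test wrapper in common/autotest_common.sh. A simplified sketch of its shape; hedged, since the real function also handles xtrace and exit-code bookkeeping:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"; local rc=$?   # emits the real/user/sys lines seen above
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }
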
00:24:04.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:24:04.617 13:44:12 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:04.617 13:44:12 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lcov --version 00:24:04.617 13:44:12 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:04.876 13:44:12 blockdev_nvme_gpt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:04.876 13:44:12 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:24:04.876 13:44:12 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:04.876 13:44:12 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:04.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.876 --rc genhtml_branch_coverage=1 00:24:04.876 --rc genhtml_function_coverage=1 00:24:04.876 --rc genhtml_legend=1 00:24:04.876 --rc geninfo_all_blocks=1 00:24:04.876 --rc geninfo_unexecuted_blocks=1 00:24:04.876 00:24:04.876 ' 00:24:04.876 13:44:12 blockdev_nvme_gpt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:04.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.876 --rc 
genhtml_branch_coverage=1 00:24:04.876 --rc genhtml_function_coverage=1 00:24:04.876 --rc genhtml_legend=1 00:24:04.876 --rc geninfo_all_blocks=1 00:24:04.876 --rc geninfo_unexecuted_blocks=1 00:24:04.876 00:24:04.876 ' 00:24:04.876 13:44:12 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:04.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.876 --rc genhtml_branch_coverage=1 00:24:04.876 --rc genhtml_function_coverage=1 00:24:04.876 --rc genhtml_legend=1 00:24:04.876 --rc geninfo_all_blocks=1 00:24:04.876 --rc geninfo_unexecuted_blocks=1 00:24:04.876 00:24:04.876 ' 00:24:04.876 13:44:12 blockdev_nvme_gpt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:04.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:04.876 --rc genhtml_branch_coverage=1 00:24:04.876 --rc genhtml_function_coverage=1 00:24:04.876 --rc genhtml_legend=1 00:24:04.876 --rc geninfo_all_blocks=1 00:24:04.876 --rc geninfo_unexecuted_blocks=1 00:24:04.876 00:24:04.876 ' 00:24:04.876 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:24:04.876 13:44:12 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:24:04.876 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:24:04.876 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:04.876 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:24:04.876 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:24:04.876 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:24:04.876 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:24:04.876 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:24:04.876 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:24:04.876 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:24:04.876 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:24:04.876 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:24:04.876 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:24:04.876 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:24:04.877 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:24:04.877 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:24:04.877 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:24:04.877 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:24:04.877 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:24:04.877 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:24:04.877 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:24:04.877 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:24:04.877 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:24:04.877 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62328 00:24:04.877 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:24:04.877 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62328 
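
The lt 1.15 2 / cmp_versions trace just above is autotest_common.sh deciding whether the installed lcov (1.15) predates version 2 before choosing coverage flags. A condensed sketch of that dot-separated, field-by-field comparison (assumes purely numeric fields, as in the trace):

    # Sketch of the cmp_versions/lt logic: returns 0 (true) when $1 < $2.
    lt() {
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # versions are equal, so not less-than
    }
    lt 1.15 2 && echo 'lcov 1.15 is older than 2.x'
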
00:24:04.877 13:44:12 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:24:04.877 13:44:12 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62328 ']' 00:24:04.877 13:44:12 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.877 13:44:12 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.877 13:44:12 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:04.877 13:44:12 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.877 13:44:12 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:24:04.877 [2024-11-20 13:44:12.548634] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:24:04.877 [2024-11-20 13:44:12.548809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62328 ] 00:24:05.135 [2024-11-20 13:44:12.717233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.395 [2024-11-20 13:44:12.854109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.331 13:44:13 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.331 13:44:13 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:24:06.331 13:44:13 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:24:06.331 13:44:13 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:24:06.331 13:44:13 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:06.896 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:07.177 Waiting for block devices as requested 00:24:07.177 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:07.177 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:07.177 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:24:07.437 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:24:12.731 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:24:12.731 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local nvme bdf 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:24:12.731 13:44:20 
blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n2 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:12.731 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:24:12.732 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n3 00:24:12.732 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:24:12.732 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:24:12.732 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:12.732 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:24:12.732 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3c3n1 00:24:12.732 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:24:12.732 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:24:12.732 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:12.732 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:24:12.732 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:24:12.732 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:24:12.732 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:24:12.732 13:44:20 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:24:12.732 13:44:20 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:24:12.732 BYT; 00:24:12.732 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:24:12.732 BYT; 00:24:12.732 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:24:12.732 13:44:20 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:24:12.732 13:44:20 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:24:13.670 The operation has completed successfully. 00:24:13.670 13:44:21 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:24:14.609 The operation has completed successfully. 00:24:14.609 13:44:22 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:15.177 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:16.120 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:16.120 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:16.120 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:24:16.120 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:24:16.120 13:44:23 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:24:16.120 13:44:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.120 13:44:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:24:16.120 [] 00:24:16.120 13:44:23 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.120 13:44:23 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:24:16.120 13:44:23 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:24:16.120 13:44:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:24:16.120 13:44:23 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:24:16.120 13:44:23 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:24:16.120 13:44:23 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.120 13:44:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:24:16.379 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.380 13:44:24 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:24:16.380 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.380 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:24:16.380 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.380 13:44:24 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:24:16.380 13:44:24 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:24:16.380 13:44:24 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.380 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:24:16.640 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.640 13:44:24 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:24:16.640 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.640 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:24:16.640 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.640 13:44:24 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:24:16.640 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.640 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:24:16.640 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.640 13:44:24 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:24:16.640 13:44:24 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:24:16.640 13:44:24 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:24:16.640 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.640 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:24:16.640 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.640 13:44:24 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:24:16.640 13:44:24 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:24:16.641 13:44:24 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ffc3e2c1-c24b-48dc-b534-84dafe02e7fb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ffc3e2c1-c24b-48dc-b534-84dafe02e7fb",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "dba08f7f-c7b2-4e4f-999d-6f009513b5fb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dba08f7f-c7b2-4e4f-999d-6f009513b5fb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f38dce63-2dca-4d3c-bd87-d26796eaf778"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f38dce63-2dca-4d3c-bd87-d26796eaf778",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "4ea77e8e-57d7-4966-b13f-c051552b7d2e"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4ea77e8e-57d7-4966-b13f-c051552b7d2e",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "0cb80240-ae50-4e99-9706-247273909b76"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0cb80240-ae50-4e99-9706-247273909b76",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:24:16.641 13:44:24 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:24:16.641 13:44:24 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:24:16.641 13:44:24 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:24:16.641 13:44:24 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 62328 00:24:16.641 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62328 ']' 00:24:16.641 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62328 00:24:16.641 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:24:16.641 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.641 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62328 00:24:16.641 killing process with pid 62328 00:24:16.641 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:16.641 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:16.641 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62328' 00:24:16.641 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62328 00:24:16.641 13:44:24 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62328 00:24:19.927 13:44:26 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:19.927 13:44:26 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:24:19.927 13:44:26 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:24:19.927 13:44:26 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:19.927 13:44:26 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:24:19.927 ************************************ 00:24:19.927 START TEST bdev_hello_world 00:24:19.927 ************************************ 00:24:19.927 13:44:26 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:24:19.927 
[2024-11-20 13:44:27.065810] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:24:19.927 [2024-11-20 13:44:27.065948] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62975 ] 00:24:19.927 [2024-11-20 13:44:27.244479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:19.927 [2024-11-20 13:44:27.375368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:20.528 [2024-11-20 13:44:28.074833] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:24:20.528 [2024-11-20 13:44:28.074887] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:24:20.528 [2024-11-20 13:44:28.074915] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:24:20.528 [2024-11-20 13:44:28.078069] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:24:20.528 [2024-11-20 13:44:28.078655] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:24:20.528 [2024-11-20 13:44:28.078686] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:24:20.528 [2024-11-20 13:44:28.078925] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:24:20.528 00:24:20.528 [2024-11-20 13:44:28.078962] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:24:21.908 00:24:21.908 real 0m2.316s 00:24:21.908 user 0m1.947s 00:24:21.908 sys 0m0.261s 00:24:21.908 13:44:29 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:21.908 13:44:29 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:24:21.908 ************************************ 00:24:21.908 END TEST bdev_hello_world 00:24:21.908 ************************************ 00:24:21.908 13:44:29 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:24:21.908 13:44:29 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:21.908 13:44:29 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:21.908 13:44:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:24:21.908 ************************************ 00:24:21.908 START TEST bdev_bounds 00:24:21.908 ************************************ 00:24:21.908 13:44:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:24:21.908 13:44:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63017 00:24:21.908 13:44:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:24:21.908 13:44:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:21.908 13:44:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63017' 00:24:21.908 Process bdevio pid: 63017 00:24:21.908 13:44:29 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63017 00:24:21.908 13:44:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63017 ']' 00:24:21.908 13:44:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.908 13:44:29 
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.908 13:44:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.908 13:44:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.908 13:44:29 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:21.908 [2024-11-20 13:44:29.452890] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:24:21.908 [2024-11-20 13:44:29.453047] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63017 ] 00:24:22.166 [2024-11-20 13:44:29.630988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:22.166 [2024-11-20 13:44:29.782471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.166 [2024-11-20 13:44:29.782590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.166 [2024-11-20 13:44:29.782611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.103 13:44:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:23.103 13:44:30 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:24:23.103 13:44:30 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:24:23.103 I/O targets: 00:24:23.103 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:24:23.103 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:24:23.103 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:24:23.103 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:23.103 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:23.103 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:24:23.103 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:24:23.103 00:24:23.103 00:24:23.103 CUnit - A unit testing framework for C - Version 2.1-3 00:24:23.103 http://cunit.sourceforge.net/ 00:24:23.103 00:24:23.103 00:24:23.103 Suite: bdevio tests on: Nvme3n1 00:24:23.103 Test: blockdev write read block ...passed 00:24:23.103 Test: blockdev write zeroes read block ...passed 00:24:23.103 Test: blockdev write zeroes read no split ...passed 00:24:23.103 Test: blockdev write zeroes read split ...passed 00:24:23.103 Test: blockdev write zeroes read split partial ...passed 00:24:23.103 Test: blockdev reset ...[2024-11-20 13:44:30.691728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:24:23.103 [2024-11-20 13:44:30.695784] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
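
A note on the harness driving the suites printing here: bdevio was started in wait mode and the CUnit tests were then kicked off over RPC by tests.py, exactly as traced above. A sketch of that two-step flow with the same paths and flags:

    SPDK=/home/vagrant/spdk_repo/spdk
    # -w: set up the bdevs and wait for an RPC trigger; -s 0: no reserved memory
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
    # once the app is listening, run every registered bdevio test:
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests
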
00:24:23.103 passed 00:24:23.103 Test: blockdev write read 8 blocks ...passed 00:24:23.103 Test: blockdev write read size > 128k ...passed 00:24:23.103 Test: blockdev write read invalid size ...passed 00:24:23.103 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:23.103 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:23.103 Test: blockdev write read max offset ...passed 00:24:23.103 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:23.103 Test: blockdev writev readv 8 blocks ...passed 00:24:23.103 Test: blockdev writev readv 30 x 1block ...passed 00:24:23.103 Test: blockdev writev readv block ...passed 00:24:23.103 Test: blockdev writev readv size > 128k ...passed 00:24:23.103 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:23.104 Test: blockdev comparev and writev ...[2024-11-20 13:44:30.705059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ad604000 len:0x1000 00:24:23.104 [2024-11-20 13:44:30.705114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:24:23.104 passed 00:24:23.104 Test: blockdev nvme passthru rw ...passed 00:24:23.104 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:44:30.705759] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:24:23.104 [2024-11-20 13:44:30.705786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:24:23.104 passed 00:24:23.104 Test: blockdev nvme admin passthru ...passed 00:24:23.104 Test: blockdev copy ...passed 00:24:23.104 Suite: bdevio tests on: Nvme2n3 00:24:23.104 Test: blockdev write read block ...passed 00:24:23.104 Test: blockdev write zeroes read block ...passed 00:24:23.104 Test: blockdev write zeroes read no split ...passed 00:24:23.104 Test: blockdev write zeroes read split ...passed 00:24:23.104 Test: blockdev write zeroes read split partial ...passed 00:24:23.104 Test: blockdev reset ...[2024-11-20 13:44:30.788215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:24:23.104 [2024-11-20 13:44:30.792850] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:24:23.104 passed 00:24:23.104 Test: blockdev write read 8 blocks ...passed 00:24:23.104 Test: blockdev write read size > 128k ...passed 00:24:23.104 Test: blockdev write read invalid size ...passed 00:24:23.104 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:23.104 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:23.104 Test: blockdev write read max offset ...passed 00:24:23.104 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:23.104 Test: blockdev writev readv 8 blocks ...passed 00:24:23.104 Test: blockdev writev readv 30 x 1block ...passed 00:24:23.104 Test: blockdev writev readv block ...passed 00:24:23.104 Test: blockdev writev readv size > 128k ...passed 00:24:23.104 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:23.104 Test: blockdev comparev and writev ...[2024-11-20 13:44:30.801864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ad602000 len:0x1000 00:24:23.104 [2024-11-20 13:44:30.801915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:24:23.104 passed 00:24:23.104 Test: blockdev nvme passthru rw ...passed 00:24:23.104 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:44:30.802531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:24:23.104 [2024-11-20 13:44:30.802568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:24:23.104 passed 00:24:23.104 Test: blockdev nvme admin passthru ...passed 00:24:23.104 Test: blockdev copy ...passed 00:24:23.104 Suite: bdevio tests on: Nvme2n2 00:24:23.104 Test: blockdev write read block ...passed 00:24:23.104 Test: blockdev write zeroes read block ...passed 00:24:23.104 Test: blockdev write zeroes read no split ...passed 00:24:23.363 Test: blockdev write zeroes read split ...passed 00:24:23.363 Test: blockdev write zeroes read split partial ...passed 00:24:23.363 Test: blockdev reset ...[2024-11-20 13:44:30.893219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:24:23.363 passed 00:24:23.363 Test: blockdev write read 8 blocks ...[2024-11-20 13:44:30.897781] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:24:23.363 passed 00:24:23.363 Test: blockdev write read size > 128k ...passed 00:24:23.363 Test: blockdev write read invalid size ...passed 00:24:23.363 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:23.363 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:23.363 Test: blockdev write read max offset ...passed 00:24:23.363 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:23.363 Test: blockdev writev readv 8 blocks ...passed 00:24:23.363 Test: blockdev writev readv 30 x 1block ...passed 00:24:23.363 Test: blockdev writev readv block ...passed 00:24:23.363 Test: blockdev writev readv size > 128k ...passed 00:24:23.363 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:23.363 Test: blockdev comparev and writev ...[2024-11-20 13:44:30.906282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1438000 len:0x1000 00:24:23.363 [2024-11-20 13:44:30.906458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:24:23.363 passed 00:24:23.363 Test: blockdev nvme passthru rw ...passed 00:24:23.364 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:44:30.907236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:24:23.364 [2024-11-20 13:44:30.907274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:24:23.364 passed 00:24:23.364 Test: blockdev nvme admin passthru ...passed 00:24:23.364 Test: blockdev copy ...passed 00:24:23.364 Suite: bdevio tests on: Nvme2n1 00:24:23.364 Test: blockdev write read block ...passed 00:24:23.364 Test: blockdev write zeroes read block ...passed 00:24:23.364 Test: blockdev write zeroes read no split ...passed 00:24:23.364 Test: blockdev write zeroes read split ...passed 00:24:23.364 Test: blockdev write zeroes read split partial ...passed 00:24:23.364 Test: blockdev reset ...[2024-11-20 13:44:30.989851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:24:23.364 [2024-11-20 13:44:30.994067] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:24:23.364 passed 00:24:23.364 Test: blockdev write read 8 blocks ...passed 00:24:23.364 Test: blockdev write read size > 128k ...passed 00:24:23.364 Test: blockdev write read invalid size ...passed 00:24:23.364 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:23.364 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:23.364 Test: blockdev write read max offset ...passed 00:24:23.364 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:23.364 Test: blockdev writev readv 8 blocks ...passed 00:24:23.364 Test: blockdev writev readv 30 x 1block ...passed 00:24:23.364 Test: blockdev writev readv block ...passed 00:24:23.364 Test: blockdev writev readv size > 128k ...passed 00:24:23.364 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:23.364 Test: blockdev comparev and writev ...[2024-11-20 13:44:31.002092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1434000 len:0x1000 00:24:23.364 [2024-11-20 13:44:31.002144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:24:23.364 passed 00:24:23.364 Test: blockdev nvme passthru rw ...passed 00:24:23.364 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:44:31.002891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:24:23.364 [2024-11-20 13:44:31.002924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:24:23.364 passed 00:24:23.364 Test: blockdev nvme admin passthru ...passed 00:24:23.364 Test: blockdev copy ...passed 00:24:23.364 Suite: bdevio tests on: Nvme1n1p2 00:24:23.364 Test: blockdev write read block ...passed 00:24:23.364 Test: blockdev write zeroes read block ...passed 00:24:23.364 Test: blockdev write zeroes read no split ...passed 00:24:23.364 Test: blockdev write zeroes read split ...passed 00:24:23.622 Test: blockdev write zeroes read split partial ...passed 00:24:23.622 Test: blockdev reset ...[2024-11-20 13:44:31.086934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:24:23.622 [2024-11-20 13:44:31.090874] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:24:23.622 passed 00:24:23.622 Test: blockdev write read 8 blocks ...passed 00:24:23.622 Test: blockdev write read size > 128k ...passed 00:24:23.622 Test: blockdev write read invalid size ...passed 00:24:23.622 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:23.622 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:23.622 Test: blockdev write read max offset ...passed 00:24:23.622 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:23.622 Test: blockdev writev readv 8 blocks ...passed 00:24:23.622 Test: blockdev writev readv 30 x 1block ...passed 00:24:23.622 Test: blockdev writev readv block ...passed 00:24:23.622 Test: blockdev writev readv size > 128k ...passed 00:24:23.622 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:23.622 Test: blockdev comparev and writev ...[2024-11-20 13:44:31.099302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c1430000 len:0x1000 00:24:23.622 [2024-11-20 13:44:31.099350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:24:23.622 passed 00:24:23.622 Test: blockdev nvme passthru rw ...passed 00:24:23.622 Test: blockdev nvme passthru vendor specific ...passed 00:24:23.622 Test: blockdev nvme admin passthru ...passed 00:24:23.622 Test: blockdev copy ...passed 00:24:23.622 Suite: bdevio tests on: Nvme1n1p1 00:24:23.622 Test: blockdev write read block ...passed 00:24:23.622 Test: blockdev write zeroes read block ...passed 00:24:23.622 Test: blockdev write zeroes read no split ...passed 00:24:23.622 Test: blockdev write zeroes read split ...passed 00:24:23.622 Test: blockdev write zeroes read split partial ...passed 00:24:23.622 Test: blockdev reset ...[2024-11-20 13:44:31.172656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:24:23.622 passed 00:24:23.622 Test: blockdev write read 8 blocks ...[2024-11-20 13:44:31.176665] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:24:23.622 passed 00:24:23.622 Test: blockdev write read size > 128k ...passed 00:24:23.622 Test: blockdev write read invalid size ...passed 00:24:23.622 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:23.622 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:23.622 Test: blockdev write read max offset ...passed 00:24:23.622 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:23.622 Test: blockdev writev readv 8 blocks ...passed 00:24:23.623 Test: blockdev writev readv 30 x 1block ...passed 00:24:23.623 Test: blockdev writev readv block ...passed 00:24:23.623 Test: blockdev writev readv size > 128k ...passed 00:24:23.623 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:23.623 Test: blockdev comparev and writev ...[2024-11-20 13:44:31.184803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2ad80e000 len:0x1000 00:24:23.623 [2024-11-20 13:44:31.184848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:24:23.623 passed 00:24:23.623 Test: blockdev nvme passthru rw ...passed 00:24:23.623 Test: blockdev nvme passthru vendor specific ...passed 00:24:23.623 Test: blockdev nvme admin passthru ...passed 00:24:23.623 Test: blockdev copy ...passed 00:24:23.623 Suite: bdevio tests on: Nvme0n1 00:24:23.623 Test: blockdev write read block ...passed 00:24:23.623 Test: blockdev write zeroes read block ...passed 00:24:23.623 Test: blockdev write zeroes read no split ...passed 00:24:23.623 Test: blockdev write zeroes read split ...passed 00:24:23.623 Test: blockdev write zeroes read split partial ...passed 00:24:23.623 Test: blockdev reset ...[2024-11-20 13:44:31.255563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:24:23.623 passed 00:24:23.623 Test: blockdev write read 8 blocks ...[2024-11-20 13:44:31.259569] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:24:23.623 passed 00:24:23.623 Test: blockdev write read size > 128k ...passed 00:24:23.623 Test: blockdev write read invalid size ...passed 00:24:23.623 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:23.623 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:23.623 Test: blockdev write read max offset ...passed 00:24:23.623 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:23.623 Test: blockdev writev readv 8 blocks ...passed 00:24:23.623 Test: blockdev writev readv 30 x 1block ...passed 00:24:23.623 Test: blockdev writev readv block ...passed 00:24:23.623 Test: blockdev writev readv size > 128k ...passed 00:24:23.623 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:23.623 Test: blockdev comparev and writev ...[2024-11-20 13:44:31.266246] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:24:23.623 separate metadata which is not supported yet. 
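The skip notice above is the expected branch for a namespace formatted with separate metadata: bdevio declines comparev_and_writev on Nvme0n1 rather than failing it. One way to confirm the layout from the RPC side is to dump the bdev and inspect its metadata fields; a hedged sketch, where the md_size/md_interleave field names are an assumption about the bdev_get_bdevs output schema:

  # dump Nvme0n1 and pull out its metadata layout (md_size/md_interleave
  # field names are an assumption about the JSON schema)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
      | jq '.[0] | {block_size, md_size, md_interleave}'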
00:24:23.623 passed 00:24:23.623 Test: blockdev nvme passthru rw ...passed 00:24:23.623 Test: blockdev nvme passthru vendor specific ...[2024-11-20 13:44:31.266645] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:24:23.623 [2024-11-20 13:44:31.266689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:24:23.623 passed 00:24:23.623 Test: blockdev nvme admin passthru ...passed 00:24:23.623 Test: blockdev copy ...passed 00:24:23.623 00:24:23.623 Run Summary: Type Total Ran Passed Failed Inactive 00:24:23.623 suites 7 7 n/a 0 0 00:24:23.623 tests 161 161 161 0 0 00:24:23.623 asserts 1025 1025 1025 0 n/a 00:24:23.623 00:24:23.623 Elapsed time = 1.806 seconds 00:24:23.623 0 00:24:23.623 13:44:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63017 00:24:23.623 13:44:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63017 ']' 00:24:23.623 13:44:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63017 00:24:23.623 13:44:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:24:23.623 13:44:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:23.623 13:44:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63017 00:24:23.880 13:44:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:23.880 13:44:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:23.880 killing process with pid 63017 00:24:23.880 13:44:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63017' 00:24:23.880 13:44:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63017 00:24:23.880 13:44:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63017 00:24:24.813 13:44:32 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:24:24.813 00:24:24.813 real 0m3.086s 00:24:24.813 user 0m7.935s 00:24:24.813 sys 0m0.417s 00:24:24.813 13:44:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:24.813 13:44:32 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:24:24.813 ************************************ 00:24:24.813 END TEST bdev_bounds 00:24:24.813 ************************************ 00:24:24.813 13:44:32 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:24:24.813 13:44:32 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:24:24.813 13:44:32 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:24.813 13:44:32 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:24:24.813 ************************************ 00:24:24.813 START TEST bdev_nbd 00:24:24.814 ************************************ 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:24:24.814 13:44:32 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63082 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63082 /var/tmp/spdk-nbd.sock 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63082 ']' 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:24:24.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:24.814 13:44:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:25.072 [2024-11-20 13:44:32.616157] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
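The prologue above is nbd_function_test's setup: it boots a bare bdev_svc application on a private RPC socket (/var/tmp/spdk-nbd.sock), loads the seven bdevs from bdev.json, and then exports each one as a kernel NBD device. Stripped of the harness, the same flow is roughly the following, with paths taken from the trace and the harness's waitforlisten poll replaced by a naive sleep:

  # boot a minimal bdev application on a dedicated RPC socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc \
      -r /var/tmp/spdk-nbd.sock -i 0 \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
  sleep 2   # crude stand-in for waitforlisten polling the socket
  # export one bdev as a kernel block device via the nbd module
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_start_disk Nvme0n1 /dev/nbd0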
00:24:25.072 [2024-11-20 13:44:32.616375] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:25.072 [2024-11-20 13:44:32.777195] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.331 [2024-11-20 13:44:32.898924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:26.267 1+0 records in 00:24:26.267 1+0 records out 00:24:26.267 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514678 s, 8.0 MB/s 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:24:26.267 13:44:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:26.526 1+0 records in 00:24:26.526 1+0 records out 00:24:26.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532491 s, 7.7 MB/s 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:24:26.526 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:26.784 1+0 records in 00:24:26.784 1+0 records out 00:24:26.784 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000464566 s, 8.8 MB/s 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:26.784 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:26.785 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:24:26.785 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:27.055 1+0 records in 00:24:27.055 1+0 records out 00:24:27.055 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000669945 s, 6.1 MB/s 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:24:27.055 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:24:27.337 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:24:27.337 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:24:27.337 13:44:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:24:27.337 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:24:27.337 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:27.337 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:27.337 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:27.337 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:24:27.337 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:27.337 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:27.337 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:27.337 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:27.337 1+0 records in 00:24:27.337 1+0 records out 00:24:27.337 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00065874 s, 6.2 MB/s 00:24:27.337 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:27.337 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:27.337 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:27.337 13:44:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:27.337 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:27.337 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:27.337 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:24:27.337 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 
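Each nbd_start_disk above is immediately followed by waitfornbd, which polls /proc/partitions for the new node and then proves it is readable with a single O_DIRECT 4 KiB read, checking that exactly one full block came back. A standalone version of that check, with the helper's bounded 20-try loop simplified to an open-ended poll and an arbitrary scratch path:

  # wait for the kernel to publish the device, then read one block back
  until grep -q -w nbd0 /proc/partitions; do sleep 0.1; done
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  test "$(stat -c %s /tmp/nbdtest)" -eq 4096   # exactly one 4 KiB block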
00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:27.601 1+0 records in 00:24:27.601 1+0 records out 00:24:27.601 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000862191 s, 4.8 MB/s 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:24:27.601 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:27.861 1+0 records in 00:24:27.861 1+0 records out 00:24:27.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000759894 s, 5.4 MB/s 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:24:27.861 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:28.119 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:24:28.120 { 00:24:28.120 "nbd_device": "/dev/nbd0", 00:24:28.120 "bdev_name": "Nvme0n1" 00:24:28.120 }, 00:24:28.120 { 00:24:28.120 "nbd_device": "/dev/nbd1", 00:24:28.120 "bdev_name": "Nvme1n1p1" 00:24:28.120 }, 00:24:28.120 { 00:24:28.120 "nbd_device": "/dev/nbd2", 00:24:28.120 "bdev_name": "Nvme1n1p2" 00:24:28.120 }, 00:24:28.120 { 00:24:28.120 "nbd_device": "/dev/nbd3", 00:24:28.120 "bdev_name": "Nvme2n1" 00:24:28.120 }, 00:24:28.120 { 00:24:28.120 "nbd_device": "/dev/nbd4", 00:24:28.120 "bdev_name": "Nvme2n2" 00:24:28.120 }, 00:24:28.120 { 00:24:28.120 "nbd_device": "/dev/nbd5", 00:24:28.120 "bdev_name": "Nvme2n3" 00:24:28.120 }, 00:24:28.120 { 00:24:28.120 "nbd_device": "/dev/nbd6", 00:24:28.120 "bdev_name": "Nvme3n1" 00:24:28.120 } 00:24:28.120 ]' 00:24:28.120 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:24:28.120 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:24:28.120 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:24:28.120 { 00:24:28.120 "nbd_device": "/dev/nbd0", 00:24:28.120 "bdev_name": "Nvme0n1" 00:24:28.120 }, 00:24:28.120 { 00:24:28.120 "nbd_device": "/dev/nbd1", 00:24:28.120 "bdev_name": "Nvme1n1p1" 00:24:28.120 }, 00:24:28.120 { 00:24:28.120 "nbd_device": "/dev/nbd2", 00:24:28.120 "bdev_name": "Nvme1n1p2" 00:24:28.120 }, 00:24:28.120 { 00:24:28.120 "nbd_device": "/dev/nbd3", 00:24:28.120 "bdev_name": "Nvme2n1" 00:24:28.120 }, 00:24:28.120 { 00:24:28.120 "nbd_device": "/dev/nbd4", 00:24:28.120 "bdev_name": "Nvme2n2" 00:24:28.120 }, 00:24:28.120 { 00:24:28.120 "nbd_device": "/dev/nbd5", 00:24:28.120 "bdev_name": "Nvme2n3" 00:24:28.120 }, 00:24:28.120 { 00:24:28.120 "nbd_device": "/dev/nbd6", 00:24:28.120 "bdev_name": "Nvme3n1" 00:24:28.120 } 00:24:28.120 ]' 00:24:28.120 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:24:28.120 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:28.120 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:24:28.120 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:28.120 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:28.120 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:28.120 13:44:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:28.378 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:28.378 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:28.378 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:28.378 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:28.378 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:28.378 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:28.378 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:28.378 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:28.378 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:28.378 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:28.637 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:28.637 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:28.638 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:28.638 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:28.638 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:28.638 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:28.638 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:28.638 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:28.638 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:28.638 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:24:28.896 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:24:28.896 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:24:28.896 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:24:28.896 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:28.896 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:28.896 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:24:28.896 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:28.896 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:28.896 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:28.896 13:44:36 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:24:29.155 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:24:29.155 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:24:29.155 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:24:29.155 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:29.155 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:29.155 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:24:29.155 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:29.155 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:29.155 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:29.155 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:24:29.413 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:24:29.413 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:24:29.413 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:24:29.413 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:29.413 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:29.413 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:24:29.413 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:29.413 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:29.413 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:29.413 13:44:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:24:29.672 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:24:29.672 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:24:29.672 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:24:29.672 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:29.672 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:29.672 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:24:29.672 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:29.672 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:29.672 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:29.672 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:24:29.932 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:24:29.932 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:24:29.932 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 
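Teardown mirrors setup: nbd_stop_disks walks the device list, detaches each export with nbd_stop_disk, and waits (waitfornbd_exit) until the node leaves /proc/partitions before moving on. Condensed to one device, again with the bounded retry loop simplified to a plain poll:

  # detach the export and wait for the kernel to retire the node
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_stop_disk /dev/nbd6
  while grep -q -w nbd6 /proc/partitions; do sleep 0.1; done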
00:24:29.932 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:29.932 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:29.932 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:24:29.932 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:29.932 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:29.932 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:29.932 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:29.932 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:24:30.191 13:44:37 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:24:30.191 13:44:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:24:30.450 /dev/nbd0 00:24:30.450 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:24:30.450 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:24:30.450 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:24:30.450 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:30.450 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:30.450 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:30.450 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:24:30.450 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:30.451 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:30.451 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:30.451 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:30.451 1+0 records in 00:24:30.451 1+0 records out 00:24:30.451 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742928 s, 5.5 MB/s 00:24:30.451 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:30.451 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:30.451 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:30.451 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:30.451 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:30.451 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:30.451 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:24:30.451 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:24:30.709 /dev/nbd1 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:30.709 13:44:38 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:30.709 1+0 records in 00:24:30.709 1+0 records out 00:24:30.709 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000801602 s, 5.1 MB/s 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:24:30.709 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:24:30.969 /dev/nbd10 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:30.969 1+0 records in 00:24:30.969 1+0 records out 00:24:30.969 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000601276 s, 6.8 MB/s 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:24:30.969 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:24:31.229 /dev/nbd11 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:31.229 1+0 records in 00:24:31.229 1+0 records out 00:24:31.229 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000763842 s, 5.4 MB/s 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:24:31.229 13:44:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:24:31.489 /dev/nbd12 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
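In this second pass (nbd_rpc_data_verify) the harness pins each bdev to a fixed node by passing the device path to nbd_start_disk, rather than letting the target choose as in the first pass, and then lists the active exports as JSON before pushing data through each one. By hand, using commands visible in the trace:

  # pin a bdev to a specific NBD node, then enumerate all active exports
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_start_disk Nvme2n2 /dev/nbd12
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock \
      nbd_get_disks | jq -r '.[] | .nbd_device'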
00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:31.489 1+0 records in 00:24:31.489 1+0 records out 00:24:31.489 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000948787 s, 4.3 MB/s 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:24:31.489 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:24:31.748 /dev/nbd13 00:24:31.748 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:24:31.748 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:24:31.748 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:24:31.748 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:31.748 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:31.748 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:31.748 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:24:31.748 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:31.748 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:31.748 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:31.748 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:31.748 1+0 records in 00:24:31.748 1+0 records out 00:24:31.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000733279 s, 5.6 MB/s 00:24:31.749 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:31.749 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:31.749 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:31.749 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:31.749 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:31.749 13:44:39 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:31.749 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:24:31.749 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:24:32.009 /dev/nbd14 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:24:32.009 1+0 records in 00:24:32.009 1+0 records out 00:24:32.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000532908 s, 7.7 MB/s 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:32.009 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:32.269 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:24:32.269 { 00:24:32.269 "nbd_device": "/dev/nbd0", 00:24:32.269 "bdev_name": "Nvme0n1" 00:24:32.269 }, 00:24:32.269 { 00:24:32.269 "nbd_device": "/dev/nbd1", 00:24:32.269 "bdev_name": "Nvme1n1p1" 00:24:32.269 }, 00:24:32.269 { 00:24:32.269 "nbd_device": "/dev/nbd10", 00:24:32.269 "bdev_name": "Nvme1n1p2" 00:24:32.269 }, 00:24:32.269 { 00:24:32.269 "nbd_device": "/dev/nbd11", 00:24:32.269 "bdev_name": "Nvme2n1" 00:24:32.269 }, 00:24:32.269 { 00:24:32.269 "nbd_device": "/dev/nbd12", 00:24:32.269 "bdev_name": "Nvme2n2" 00:24:32.269 }, 00:24:32.269 { 00:24:32.269 "nbd_device": "/dev/nbd13", 00:24:32.269 "bdev_name": "Nvme2n3" 
00:24:32.269 }, 00:24:32.269 { 00:24:32.269 "nbd_device": "/dev/nbd14", 00:24:32.269 "bdev_name": "Nvme3n1" 00:24:32.269 } 00:24:32.269 ]' 00:24:32.269 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:24:32.269 { 00:24:32.269 "nbd_device": "/dev/nbd0", 00:24:32.269 "bdev_name": "Nvme0n1" 00:24:32.269 }, 00:24:32.269 { 00:24:32.269 "nbd_device": "/dev/nbd1", 00:24:32.269 "bdev_name": "Nvme1n1p1" 00:24:32.269 }, 00:24:32.269 { 00:24:32.269 "nbd_device": "/dev/nbd10", 00:24:32.269 "bdev_name": "Nvme1n1p2" 00:24:32.269 }, 00:24:32.269 { 00:24:32.269 "nbd_device": "/dev/nbd11", 00:24:32.269 "bdev_name": "Nvme2n1" 00:24:32.269 }, 00:24:32.269 { 00:24:32.269 "nbd_device": "/dev/nbd12", 00:24:32.269 "bdev_name": "Nvme2n2" 00:24:32.269 }, 00:24:32.269 { 00:24:32.269 "nbd_device": "/dev/nbd13", 00:24:32.269 "bdev_name": "Nvme2n3" 00:24:32.269 }, 00:24:32.269 { 00:24:32.269 "nbd_device": "/dev/nbd14", 00:24:32.269 "bdev_name": "Nvme3n1" 00:24:32.269 } 00:24:32.269 ]' 00:24:32.269 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:32.269 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:24:32.269 /dev/nbd1 00:24:32.269 /dev/nbd10 00:24:32.269 /dev/nbd11 00:24:32.269 /dev/nbd12 00:24:32.269 /dev/nbd13 00:24:32.269 /dev/nbd14' 00:24:32.269 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:24:32.269 /dev/nbd1 00:24:32.269 /dev/nbd10 00:24:32.269 /dev/nbd11 00:24:32.269 /dev/nbd12 00:24:32.269 /dev/nbd13 00:24:32.269 /dev/nbd14' 00:24:32.269 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:32.269 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:24:32.269 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:24:32.528 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:24:32.528 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:24:32.528 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:24:32.528 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:24:32.528 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:32.528 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:24:32.528 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:32.528 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:24:32.528 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:24:32.528 256+0 records in 00:24:32.528 256+0 records out 00:24:32.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00628131 s, 167 MB/s 00:24:32.528 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:32.528 13:44:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:24:32.528 256+0 records in 00:24:32.528 256+0 records out 00:24:32.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.10752 s, 9.8 MB/s 00:24:32.528 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:32.528 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:24:32.528 256+0 records in 00:24:32.528 256+0 records out 00:24:32.528 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116084 s, 9.0 MB/s 00:24:32.528 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:32.528 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:24:32.788 256+0 records in 00:24:32.788 256+0 records out 00:24:32.788 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.11518 s, 9.1 MB/s 00:24:32.788 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:32.788 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:24:32.788 256+0 records in 00:24:32.788 256+0 records out 00:24:32.788 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0976663 s, 10.7 MB/s 00:24:32.788 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:32.788 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:24:33.047 256+0 records in 00:24:33.047 256+0 records out 00:24:33.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0978198 s, 10.7 MB/s 00:24:33.047 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:33.047 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:24:33.047 256+0 records in 00:24:33.047 256+0 records out 00:24:33.047 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0984131 s, 10.7 MB/s 00:24:33.047 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:24:33.047 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:24:33.306 256+0 records in 00:24:33.306 256+0 records out 00:24:33.306 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.112906 s, 9.3 MB/s 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:33.306 13:44:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:33.565 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:33.565 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:33.565 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:33.565 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:33.565 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:33.565 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:33.565 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:33.565 13:44:41 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:24:33.565 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:33.565 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:24:33.824 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:24:33.824 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:24:33.824 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:24:33.824 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:33.824 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:33.824 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:24:33.824 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:33.824 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:33.824 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:33.824 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:24:34.083 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:24:34.083 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:24:34.083 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:24:34.083 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:34.083 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:34.083 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:24:34.083 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:34.083 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:34.083 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:34.083 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:24:34.342 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:24:34.342 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:24:34.342 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:24:34.342 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:34.342 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:34.342 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:24:34.342 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:34.342 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:34.342 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:34.342 13:44:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:24:34.601 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:24:34.601 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:24:34.601 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:24:34.601 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:34.601 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:34.601 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:24:34.601 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:34.601 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:34.601 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:34.601 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:24:34.860 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:24:34.860 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:24:34.860 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:24:34.860 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:34.860 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:34.860 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:24:34.860 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:34.860 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:34.860 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:24:34.860 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:24:35.120 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:24:35.121 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:24:35.121 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:24:35.121 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:35.121 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:35.121 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:24:35.121 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:35.121 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:35.121 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:24:35.121 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:35.121 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:24:35.380 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:24:35.380 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:24:35.380 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:24:35.380 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:24:35.380 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:24:35.380 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:24:35.380 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:24:35.380 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:24:35.380 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:24:35.380 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:24:35.380 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:24:35.380 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:24:35.380 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:35.380 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:35.380 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:24:35.380 13:44:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:24:35.639 malloc_lvol_verify 00:24:35.639 13:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:24:35.898 58429524-d4a4-436d-9333-3b23530e2e14 00:24:35.898 13:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:24:35.898 1a52d005-b7fc-4fa7-857b-a6bbbc26db43 00:24:36.156 13:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:24:36.156 /dev/nbd0 00:24:36.415 13:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:24:36.415 13:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:24:36.415 13:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:24:36.415 13:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:24:36.415 13:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:24:36.415 mke2fs 1.47.0 (5-Feb-2023) 00:24:36.415 Discarding device blocks: 0/4096 done 00:24:36.415 Creating filesystem with 4096 1k blocks and 1024 inodes 00:24:36.415 00:24:36.415 Allocating group tables: 0/1 done 00:24:36.415 Writing inode tables: 0/1 done 00:24:36.415 Creating journal (1024 blocks): done 00:24:36.415 Writing superblocks and filesystem accounting information: 0/1 done 00:24:36.415 00:24:36.415 13:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:24:36.415 13:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:24:36.415 13:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:24:36.415 13:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:24:36.415 13:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:24:36.415 13:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:24:36.415 13:44:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:24:36.415 13:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:24:36.415 13:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:24:36.415 13:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:24:36.415 13:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:24:36.415 13:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:24:36.415 13:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:24:36.415 13:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:24:36.415 13:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:24:36.415 13:44:44 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63082 00:24:36.415 13:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63082 ']' 00:24:36.415 13:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63082 00:24:36.415 13:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:24:36.674 13:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:36.674 13:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63082 00:24:36.674 killing process with pid 63082 00:24:36.674 13:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:36.674 13:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:36.674 13:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63082' 00:24:36.674 13:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63082 00:24:36.674 13:44:44 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63082 00:24:38.053 13:44:45 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:24:38.053 00:24:38.053 real 0m13.001s 00:24:38.053 user 0m17.705s 00:24:38.053 sys 0m4.819s 00:24:38.053 13:44:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:38.053 ************************************ 00:24:38.053 END TEST bdev_nbd 00:24:38.053 ************************************ 00:24:38.053 13:44:45 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:24:38.053 skipping fio tests on NVMe due to multi-ns failures. 00:24:38.053 13:44:45 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:24:38.053 13:44:45 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:24:38.053 13:44:45 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:24:38.053 13:44:45 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:24:38.053 13:44:45 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:38.053 13:44:45 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:38.053 13:44:45 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:24:38.053 13:44:45 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:38.053 13:44:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:24:38.053 ************************************ 00:24:38.053 START TEST bdev_verify 00:24:38.053 ************************************ 00:24:38.053 13:44:45 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:24:38.053 [2024-11-20 13:44:45.674691] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:24:38.053 [2024-11-20 13:44:45.674820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63517 ] 00:24:38.313 [2024-11-20 13:44:45.850900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:38.313 [2024-11-20 13:44:45.979816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:38.313 [2024-11-20 13:44:45.979848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.251 Running I/O for 5 seconds... 
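bdev_verify drives SPDK's bdevperf example against the same bdev.json: 4 KiB I/O, 128 outstanding per job, and the verify workload (write, read back, compare) for five seconds on two cores. Pulled out of the run_test line above, the bare invocation is:

# 4 KiB I/O, queue depth 128, verify workload for 5 s;
# -m 0x3 runs reactors on cores 0 and 1 (the two "Reactor started"
# lines above); -C is kept exactly as the harness passes it
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w verify -t 5 -C -m 0x3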
00:24:41.570 20416.00 IOPS, 79.75 MiB/s [2024-11-20T13:44:50.228Z] 20352.00 IOPS, 79.50 MiB/s [2024-11-20T13:44:51.165Z] 19946.67 IOPS, 77.92 MiB/s [2024-11-20T13:44:52.102Z] 19536.00 IOPS, 76.31 MiB/s 00:24:44.383 Latency(us) 00:24:44.383 [2024-11-20T13:44:52.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.383 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:44.383 Verification LBA range: start 0x0 length 0xbd0bd 00:24:44.383 Nvme0n1 : 5.05 1367.94 5.34 0.00 0.00 93301.37 20490.73 83336.61 00:24:44.383 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:44.383 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:24:44.384 Nvme0n1 : 5.08 1359.89 5.31 0.00 0.00 93911.95 21864.41 83794.50 00:24:44.384 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:44.384 Verification LBA range: start 0x0 length 0x4ff80 00:24:44.384 Nvme1n1p1 : 5.05 1367.44 5.34 0.00 0.00 93192.24 22322.31 80589.25 00:24:44.384 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:44.384 Verification LBA range: start 0x4ff80 length 0x4ff80 00:24:44.384 Nvme1n1p1 : 5.08 1359.33 5.31 0.00 0.00 93641.43 21635.47 75552.42 00:24:44.384 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:44.384 Verification LBA range: start 0x0 length 0x4ff7f 00:24:44.384 Nvme1n1p2 : 5.06 1367.02 5.34 0.00 0.00 93104.83 22093.36 81047.14 00:24:44.384 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:44.384 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:24:44.384 Nvme1n1p2 : 5.09 1358.92 5.31 0.00 0.00 93510.05 19117.05 76926.10 00:24:44.384 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:44.384 Verification LBA range: start 0x0 length 0x80000 00:24:44.384 Nvme2n1 : 5.06 1366.60 5.34 0.00 0.00 92980.28 21635.47 84252.39 00:24:44.384 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:44.384 Verification LBA range: start 0x80000 length 0x80000 00:24:44.384 Nvme2n1 : 5.09 1358.55 5.31 0.00 0.00 93361.63 18086.79 78757.67 00:24:44.384 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:44.384 Verification LBA range: start 0x0 length 0x80000 00:24:44.384 Nvme2n2 : 5.06 1366.16 5.34 0.00 0.00 92867.51 21864.41 85626.08 00:24:44.384 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:44.384 Verification LBA range: start 0x80000 length 0x80000 00:24:44.384 Nvme2n2 : 5.09 1358.19 5.31 0.00 0.00 93200.35 17056.53 80589.25 00:24:44.384 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:44.384 Verification LBA range: start 0x0 length 0x80000 00:24:44.384 Nvme2n3 : 5.07 1376.46 5.38 0.00 0.00 92065.70 3691.77 85626.08 00:24:44.384 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:44.384 Verification LBA range: start 0x80000 length 0x80000 00:24:44.384 Nvme2n3 : 5.09 1357.82 5.30 0.00 0.00 93050.41 16140.74 82420.82 00:24:44.384 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:44.384 Verification LBA range: start 0x0 length 0x20000 00:24:44.384 Nvme3n1 : 5.07 1376.03 5.38 0.00 0.00 91914.51 3605.91 84710.29 00:24:44.384 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:44.384 Verification LBA range: start 0x20000 length 0x20000 00:24:44.384 Nvme3n1 : 5.09 1357.46 5.30 0.00 0.00 92958.85 13393.38 
84710.29 00:24:44.384 [2024-11-20T13:44:52.103Z] =================================================================================================================== 00:24:44.384 [2024-11-20T13:44:52.103Z] Total : 19097.81 74.60 0.00 0.00 93074.36 3605.91 85626.08 00:24:46.290 00:24:46.290 real 0m8.152s 00:24:46.290 user 0m15.100s 00:24:46.290 sys 0m0.333s 00:24:46.290 ************************************ 00:24:46.290 END TEST bdev_verify 00:24:46.290 ************************************ 00:24:46.290 13:44:53 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:46.290 13:44:53 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:24:46.290 13:44:53 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:46.290 13:44:53 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:24:46.290 13:44:53 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:46.290 13:44:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:24:46.290 ************************************ 00:24:46.290 START TEST bdev_verify_big_io 00:24:46.290 ************************************ 00:24:46.290 13:44:53 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:24:46.290 [2024-11-20 13:44:53.888389] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:24:46.290 [2024-11-20 13:44:53.888602] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63626 ] 00:24:46.549 [2024-11-20 13:44:54.069547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:46.549 [2024-11-20 13:44:54.198701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.549 [2024-11-20 13:44:54.198782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.486 Running I/O for 5 seconds... 
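bdev_verify_big_io is the same bdevperf call with one change: 64 KiB I/Os instead of 4 KiB. The effect is visible in the two result tables: total IOPS drop roughly tenfold (19097.81 above vs 1934.35 below) while aggregate throughput rises (74.60 vs 120.90 MiB/s). Isolated from the run_test wrapper:

# identical to the verify run except for the I/O size
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 65536 -w verify -t 5 -C -m 0x3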
00:24:52.155 1058.00 IOPS, 66.12 MiB/s [2024-11-20T13:45:00.810Z] 2436.50 IOPS, 152.28 MiB/s [2024-11-20T13:45:01.068Z] 2619.33 IOPS, 163.71 MiB/s 00:24:53.349 Latency(us) 00:24:53.349 [2024-11-20T13:45:01.068Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.349 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:53.349 Verification LBA range: start 0x0 length 0xbd0b 00:24:53.349 Nvme0n1 : 5.57 137.82 8.61 0.00 0.00 900958.20 22093.36 989049.85 00:24:53.349 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:53.349 Verification LBA range: start 0xbd0b length 0xbd0b 00:24:53.349 Nvme0n1 : 5.74 121.95 7.62 0.00 0.00 1009627.91 31136.75 1450606.45 00:24:53.349 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:53.349 Verification LBA range: start 0x0 length 0x4ff8 00:24:53.349 Nvme1n1p1 : 5.58 137.75 8.61 0.00 0.00 880059.58 60441.94 857176.54 00:24:53.349 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:53.349 Verification LBA range: start 0x4ff8 length 0x4ff8 00:24:53.349 Nvme1n1p1 : 5.74 120.53 7.53 0.00 0.00 983377.46 47163.03 1267449.07 00:24:53.349 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:53.349 Verification LBA range: start 0x0 length 0x4ff7 00:24:53.349 Nvme1n1p2 : 5.74 89.27 5.58 0.00 0.00 1314889.33 113557.58 1787616.03 00:24:53.349 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:53.349 Verification LBA range: start 0x4ff7 length 0x4ff7 00:24:53.349 Nvme1n1p2 : 5.74 124.83 7.80 0.00 0.00 941291.63 64562.98 1494564.22 00:24:53.349 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:53.349 Verification LBA range: start 0x0 length 0x8000 00:24:53.349 Nvme2n1 : 5.74 142.58 8.91 0.00 0.00 807127.89 63189.30 934102.64 00:24:53.349 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:53.349 Verification LBA range: start 0x8000 length 0x8000 00:24:53.349 Nvme2n1 : 5.80 129.44 8.09 0.00 0.00 885769.42 55176.16 1509216.81 00:24:53.349 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:53.349 Verification LBA range: start 0x0 length 0x8000 00:24:53.349 Nvme2n2 : 5.80 149.07 9.32 0.00 0.00 756555.91 55176.16 937765.79 00:24:53.349 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:53.349 Verification LBA range: start 0x8000 length 0x8000 00:24:53.349 Nvme2n2 : 5.89 135.07 8.44 0.00 0.00 826040.36 58610.36 1531195.70 00:24:53.349 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:53.349 Verification LBA range: start 0x0 length 0x8000 00:24:53.349 Nvme2n3 : 5.86 158.17 9.89 0.00 0.00 697222.18 34113.06 945092.08 00:24:53.349 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:53.349 Verification LBA range: start 0x8000 length 0x8000 00:24:53.349 Nvme2n3 : 5.91 142.86 8.93 0.00 0.00 764222.17 11733.52 1560500.88 00:24:53.349 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:24:53.349 Verification LBA range: start 0x0 length 0x2000 00:24:53.349 Nvme3n1 : 5.91 173.19 10.82 0.00 0.00 622682.67 6524.98 952418.38 00:24:53.349 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:24:53.349 Verification LBA range: start 0x2000 length 0x2000 00:24:53.349 Nvme3n1 : 5.98 171.81 10.74 0.00 0.00 621799.86 1931.74 1194186.12 00:24:53.349 
[2024-11-20T13:45:01.068Z] =================================================================================================================== 00:24:53.349 [2024-11-20T13:45:01.068Z] Total : 1934.35 120.90 0.00 0.00 830440.35 1931.74 1787616.03 00:24:56.641 00:24:56.641 real 0m10.279s 00:24:56.641 user 0m19.302s 00:24:56.641 sys 0m0.361s 00:24:56.641 ************************************ 00:24:56.641 END TEST bdev_verify_big_io 00:24:56.641 ************************************ 00:24:56.641 13:45:04 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:56.641 13:45:04 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:24:56.641 13:45:04 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:56.641 13:45:04 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:24:56.641 13:45:04 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:56.641 13:45:04 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:24:56.641 ************************************ 00:24:56.641 START TEST bdev_write_zeroes 00:24:56.641 ************************************ 00:24:56.641 13:45:04 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:24:56.641 [2024-11-20 13:45:04.235932] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:24:56.641 [2024-11-20 13:45:04.236164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63752 ] 00:24:56.901 [2024-11-20 13:45:04.414765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:56.901 [2024-11-20 13:45:04.539092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:57.839 Running I/O for 1 seconds... 
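bdev_write_zeroes swaps the workload for write_zeroes, which issues zero-fill commands rather than carrying data buffers, and runs for only one second. Isolated from the run_test line above:

# zero-fill workload, 4 KiB commands, queue depth 128, one-second run;
# no -C/-m this time: the EAL line above shows -c 0x1, a single reactor
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
  -q 128 -o 4096 -w write_zeroes -t 1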
00:24:58.773 58688.00 IOPS, 229.25 MiB/s 00:24:58.773 Latency(us) 00:24:58.773 [2024-11-20T13:45:06.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.773 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:58.773 Nvme0n1 : 1.03 8360.53 32.66 0.00 0.00 15273.84 12935.49 31823.59 00:24:58.773 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:58.773 Nvme1n1p1 : 1.03 8351.67 32.62 0.00 0.00 15266.43 12592.07 32052.54 00:24:58.773 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:58.773 Nvme1n1p2 : 1.03 8341.55 32.58 0.00 0.00 15166.53 12706.54 25985.45 00:24:58.773 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:58.773 Nvme2n1 : 1.03 8332.91 32.55 0.00 0.00 15140.23 12992.73 24611.77 00:24:58.773 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:58.773 Nvme2n2 : 1.03 8325.09 32.52 0.00 0.00 15113.19 12649.31 24611.77 00:24:58.773 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:58.773 Nvme2n3 : 1.03 8317.06 32.49 0.00 0.00 15089.44 10874.97 26214.40 00:24:58.773 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:24:58.773 Nvme3n1 : 1.03 8308.26 32.45 0.00 0.00 15069.43 9615.76 28274.92 00:24:58.773 [2024-11-20T13:45:06.492Z] =================================================================================================================== 00:24:58.773 [2024-11-20T13:45:06.492Z] Total : 58337.08 227.88 0.00 0.00 15159.87 9615.76 32052.54 00:25:00.154 ************************************ 00:25:00.154 END TEST bdev_write_zeroes 00:25:00.154 ************************************ 00:25:00.154 00:25:00.154 real 0m3.374s 00:25:00.154 user 0m3.007s 00:25:00.154 sys 0m0.249s 00:25:00.154 13:45:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:00.154 13:45:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:25:00.154 13:45:07 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:00.154 13:45:07 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:00.154 13:45:07 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:00.154 13:45:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:00.154 ************************************ 00:25:00.154 START TEST bdev_json_nonenclosed 00:25:00.154 ************************************ 00:25:00.154 13:45:07 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:00.154 [2024-11-20 13:45:07.662111] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
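bdev_json_nonenclosed, starting here, is a negative test: bdevperf is pointed at a config whose top level is not a JSON object, and the pass condition is a clean non-zero exit through spdk_app_stop rather than a crash. The log does not show nonenclosed.json itself, so the file below is only an illustration of an input that trips the same check:

# hypothetical stand-in for test/bdev/nonenclosed.json -- the real file's
# contents are not shown in this log; any top-level non-object works
cat > /tmp/nonenclosed.json << 'EOF'
[ { "subsystems": [] } ]
EOF
# expected outcome, matching the trace below: a clean non-zero exit with
#   json_config.c: *ERROR*: Invalid JSON configuration: not enclosed in {}.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
  --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1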
00:25:00.154 [2024-11-20 13:45:07.662303] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63809 ] 00:25:00.154 [2024-11-20 13:45:07.828124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.414 [2024-11-20 13:45:07.955919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.414 [2024-11-20 13:45:07.956099] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:25:00.414 [2024-11-20 13:45:07.956162] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:00.414 [2024-11-20 13:45:07.956189] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:00.673 00:25:00.673 real 0m0.657s 00:25:00.673 user 0m0.412s 00:25:00.673 sys 0m0.141s 00:25:00.673 13:45:08 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:00.673 ************************************ 00:25:00.673 END TEST bdev_json_nonenclosed 00:25:00.673 ************************************ 00:25:00.673 13:45:08 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:25:00.673 13:45:08 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:00.673 13:45:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:25:00.674 13:45:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:00.674 13:45:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:00.674 ************************************ 00:25:00.674 START TEST bdev_json_nonarray 00:25:00.674 ************************************ 00:25:00.674 13:45:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:25:00.674 [2024-11-20 13:45:08.373684] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:25:00.674 [2024-11-20 13:45:08.373932] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63836 ] 00:25:00.933 [2024-11-20 13:45:08.555460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.193 [2024-11-20 13:45:08.677864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.193 [2024-11-20 13:45:08.678039] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
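bdev_json_nonarray is the companion negative test: here "subsystems" is present but has the wrong JSON type, and the error just printed above ('subsystems' should be an array) is the expected outcome. Again the real nonarray.json is not in this log; a plausible trigger would be:

# hypothetical stand-in for test/bdev/nonarray.json -- the real file's
# contents are not shown in this log; "subsystems" given the wrong type
cat > /tmp/nonarray.json << 'EOF'
{ "subsystems": {} }
EOF
# expected: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.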
00:25:01.193 [2024-11-20 13:45:08.678099] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:25:01.193 [2024-11-20 13:45:08.678162] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:25:01.453 00:25:01.453 real 0m0.665s 00:25:01.453 user 0m0.413s 00:25:01.453 sys 0m0.145s 00:25:01.453 ************************************ 00:25:01.453 END TEST bdev_json_nonarray 00:25:01.453 ************************************ 00:25:01.453 13:45:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:01.453 13:45:08 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:25:01.453 13:45:08 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:25:01.453 13:45:08 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:25:01.453 13:45:08 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:25:01.453 13:45:08 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:01.453 13:45:08 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:01.453 13:45:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:01.453 ************************************ 00:25:01.453 START TEST bdev_gpt_uuid 00:25:01.453 ************************************ 00:25:01.453 13:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:25:01.453 13:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:25:01.453 13:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:25:01.453 13:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63867 00:25:01.453 13:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:25:01.453 13:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63867 00:25:01.453 13:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:25:01.453 13:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63867 ']' 00:25:01.453 13:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.453 13:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.453 13:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.453 13:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.453 13:45:09 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:25:01.453 [2024-11-20 13:45:09.122903] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
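bdev_gpt_uuid, starting here, loads the same bdev.json into a standalone spdk_tgt, waits for bdev examine to finish, then asserts that each GPT partition surfaces as a bdev whose alias and driver-specific unique_partition_guid both equal the partition's GUID. The rpc/jq sequence that follows, condensed (variable names are mine; the test wraps rpc.py in its rpc_cmd helper):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
guid=6f89f330-603b-4116-ac73-2ca8eae53030   # SPDK_TEST_first's unique partition GUID

bdev=$("$rpc" bdev_get_bdevs -b "$guid")                  # look the partition bdev up by GUID
[[ $(jq -r 'length' <<< "$bdev") == 1 ]]                  # exactly one bdev comes back
[[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$guid" ]]   # its alias is the same GUID
[[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$guid" ]]

The same three assertions are then repeated for the second partition, SPDK_TEST_second, with its GUID abf1734f-66e5-4c0f-aa29-4021d4d307df.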
00:25:01.453 [2024-11-20 13:45:09.123135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63867 ] 00:25:01.713 [2024-11-20 13:45:09.302630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.713 [2024-11-20 13:45:09.426754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.651 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:02.651 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:25:02.651 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:02.651 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.651 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:25:03.220 Some configs were skipped because the RPC state that can call them passed over. 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:25:03.220 { 00:25:03.220 "name": "Nvme1n1p1", 00:25:03.220 "aliases": [ 00:25:03.220 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:25:03.220 ], 00:25:03.220 "product_name": "GPT Disk", 00:25:03.220 "block_size": 4096, 00:25:03.220 "num_blocks": 655104, 00:25:03.220 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:25:03.220 "assigned_rate_limits": { 00:25:03.220 "rw_ios_per_sec": 0, 00:25:03.220 "rw_mbytes_per_sec": 0, 00:25:03.220 "r_mbytes_per_sec": 0, 00:25:03.220 "w_mbytes_per_sec": 0 00:25:03.220 }, 00:25:03.220 "claimed": false, 00:25:03.220 "zoned": false, 00:25:03.220 "supported_io_types": { 00:25:03.220 "read": true, 00:25:03.220 "write": true, 00:25:03.220 "unmap": true, 00:25:03.220 "flush": true, 00:25:03.220 "reset": true, 00:25:03.220 "nvme_admin": false, 00:25:03.220 "nvme_io": false, 00:25:03.220 "nvme_io_md": false, 00:25:03.220 "write_zeroes": true, 00:25:03.220 "zcopy": false, 00:25:03.220 "get_zone_info": false, 00:25:03.220 "zone_management": false, 00:25:03.220 "zone_append": false, 00:25:03.220 "compare": true, 00:25:03.220 "compare_and_write": false, 00:25:03.220 "abort": true, 00:25:03.220 "seek_hole": false, 00:25:03.220 "seek_data": false, 00:25:03.220 "copy": true, 00:25:03.220 "nvme_iov_md": false 00:25:03.220 }, 00:25:03.220 "driver_specific": { 
00:25:03.220 "gpt": { 00:25:03.220 "base_bdev": "Nvme1n1", 00:25:03.220 "offset_blocks": 256, 00:25:03.220 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:25:03.220 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:25:03.220 "partition_name": "SPDK_TEST_first" 00:25:03.220 } 00:25:03.220 } 00:25:03.220 } 00:25:03.220 ]' 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:25:03.220 { 00:25:03.220 "name": "Nvme1n1p2", 00:25:03.220 "aliases": [ 00:25:03.220 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:25:03.220 ], 00:25:03.220 "product_name": "GPT Disk", 00:25:03.220 "block_size": 4096, 00:25:03.220 "num_blocks": 655103, 00:25:03.220 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:25:03.220 "assigned_rate_limits": { 00:25:03.220 "rw_ios_per_sec": 0, 00:25:03.220 "rw_mbytes_per_sec": 0, 00:25:03.220 "r_mbytes_per_sec": 0, 00:25:03.220 "w_mbytes_per_sec": 0 00:25:03.220 }, 00:25:03.220 "claimed": false, 00:25:03.220 "zoned": false, 00:25:03.220 "supported_io_types": { 00:25:03.220 "read": true, 00:25:03.220 "write": true, 00:25:03.220 "unmap": true, 00:25:03.220 "flush": true, 00:25:03.220 "reset": true, 00:25:03.220 "nvme_admin": false, 00:25:03.220 "nvme_io": false, 00:25:03.220 "nvme_io_md": false, 00:25:03.220 "write_zeroes": true, 00:25:03.220 "zcopy": false, 00:25:03.220 "get_zone_info": false, 00:25:03.220 "zone_management": false, 00:25:03.220 "zone_append": false, 00:25:03.220 "compare": true, 00:25:03.220 "compare_and_write": false, 00:25:03.220 "abort": true, 00:25:03.220 "seek_hole": false, 00:25:03.220 "seek_data": false, 00:25:03.220 "copy": true, 00:25:03.220 "nvme_iov_md": false 00:25:03.220 }, 00:25:03.220 "driver_specific": { 00:25:03.220 "gpt": { 00:25:03.220 "base_bdev": "Nvme1n1", 00:25:03.220 "offset_blocks": 655360, 00:25:03.220 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:25:03.220 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:25:03.220 "partition_name": "SPDK_TEST_second" 00:25:03.220 } 00:25:03.220 } 00:25:03.220 } 00:25:03.220 ]' 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:25:03.220 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:25:03.480 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:25:03.480 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:25:03.480 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63867 00:25:03.480 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63867 ']' 00:25:03.480 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63867 00:25:03.480 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:25:03.480 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.480 13:45:10 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63867 00:25:03.480 killing process with pid 63867 00:25:03.480 13:45:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:03.480 13:45:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:03.480 13:45:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63867' 00:25:03.480 13:45:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63867 00:25:03.480 13:45:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63867 00:25:06.015 00:25:06.015 real 0m4.504s 00:25:06.015 user 0m4.657s 00:25:06.015 sys 0m0.521s 00:25:06.015 ************************************ 00:25:06.015 END TEST bdev_gpt_uuid 00:25:06.015 ************************************ 00:25:06.015 13:45:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:06.015 13:45:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:25:06.015 13:45:13 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:25:06.015 13:45:13 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:25:06.015 13:45:13 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:25:06.015 13:45:13 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:25:06.015 13:45:13 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:25:06.015 13:45:13 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:25:06.015 13:45:13 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:25:06.015 13:45:13 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:25:06.015 13:45:13 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:06.594 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:06.594 Waiting for block devices as requested 00:25:06.855 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:06.855 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:25:06.855 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:25:07.114 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:25:12.390 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:25:12.390 13:45:19 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:25:12.390 13:45:19 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:25:12.390 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:25:12.390 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:25:12.390 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:25:12.390 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:25:12.390 13:45:19 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:25:12.390 00:25:12.390 real 1m7.783s 00:25:12.390 user 1m26.640s 00:25:12.390 sys 0m11.359s 00:25:12.390 13:45:19 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:12.390 ************************************ 00:25:12.390 END TEST blockdev_nvme_gpt 00:25:12.390 ************************************ 00:25:12.390 13:45:19 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:25:12.390 13:45:20 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:25:12.390 13:45:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:12.390 13:45:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:12.390 13:45:20 -- common/autotest_common.sh@10 -- # set +x 00:25:12.390 ************************************ 00:25:12.390 START TEST nvme 00:25:12.390 ************************************ 00:25:12.390 13:45:20 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:25:12.650 * Looking for test storage... 00:25:12.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:25:12.650 13:45:20 nvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:12.650 13:45:20 nvme -- common/autotest_common.sh@1693 -- # lcov --version 00:25:12.650 13:45:20 nvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:12.650 13:45:20 nvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:12.650 13:45:20 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:12.651 13:45:20 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:12.651 13:45:20 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:12.651 13:45:20 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:25:12.651 13:45:20 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:25:12.651 13:45:20 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:25:12.651 13:45:20 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:25:12.651 13:45:20 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:25:12.651 13:45:20 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:25:12.651 13:45:20 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:25:12.651 13:45:20 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:12.651 13:45:20 nvme -- scripts/common.sh@344 -- # case "$op" in 00:25:12.651 13:45:20 nvme -- scripts/common.sh@345 -- # : 1 00:25:12.651 13:45:20 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:12.651 13:45:20 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:12.651 13:45:20 nvme -- scripts/common.sh@365 -- # decimal 1 00:25:12.651 13:45:20 nvme -- scripts/common.sh@353 -- # local d=1 00:25:12.651 13:45:20 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:12.651 13:45:20 nvme -- scripts/common.sh@355 -- # echo 1 00:25:12.651 13:45:20 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:25:12.651 13:45:20 nvme -- scripts/common.sh@366 -- # decimal 2 00:25:12.651 13:45:20 nvme -- scripts/common.sh@353 -- # local d=2 00:25:12.651 13:45:20 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:12.651 13:45:20 nvme -- scripts/common.sh@355 -- # echo 2 00:25:12.651 13:45:20 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:25:12.651 13:45:20 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:12.651 13:45:20 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:12.651 13:45:20 nvme -- scripts/common.sh@368 -- # return 0 00:25:12.651 13:45:20 nvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:12.651 13:45:20 nvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:12.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.651 --rc genhtml_branch_coverage=1 00:25:12.651 --rc genhtml_function_coverage=1 00:25:12.651 --rc genhtml_legend=1 00:25:12.651 --rc geninfo_all_blocks=1 00:25:12.651 --rc geninfo_unexecuted_blocks=1 00:25:12.651 00:25:12.651 ' 00:25:12.651 13:45:20 nvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:12.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.651 --rc genhtml_branch_coverage=1 00:25:12.651 --rc genhtml_function_coverage=1 00:25:12.651 --rc genhtml_legend=1 00:25:12.651 --rc geninfo_all_blocks=1 00:25:12.651 --rc geninfo_unexecuted_blocks=1 00:25:12.651 00:25:12.651 ' 00:25:12.651 13:45:20 nvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:12.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.651 --rc genhtml_branch_coverage=1 00:25:12.651 --rc genhtml_function_coverage=1 00:25:12.651 --rc genhtml_legend=1 00:25:12.651 --rc geninfo_all_blocks=1 00:25:12.651 --rc geninfo_unexecuted_blocks=1 00:25:12.651 00:25:12.651 ' 00:25:12.651 13:45:20 nvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:12.651 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.651 --rc genhtml_branch_coverage=1 00:25:12.651 --rc genhtml_function_coverage=1 00:25:12.651 --rc genhtml_legend=1 00:25:12.651 --rc geninfo_all_blocks=1 00:25:12.651 --rc geninfo_unexecuted_blocks=1 00:25:12.651 00:25:12.651 ' 00:25:12.651 13:45:20 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:13.220 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:14.161 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:25:14.161 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:14.161 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:14.161 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:25:14.161 13:45:21 nvme -- nvme/nvme.sh@79 -- # uname 00:25:14.161 13:45:21 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:25:14.161 13:45:21 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:25:14.161 13:45:21 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:25:14.161 13:45:21 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:25:14.161 13:45:21 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:25:14.161 13:45:21 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:25:14.161 Waiting for stub to ready for secondary processes... 00:25:14.161 13:45:21 nvme -- common/autotest_common.sh@1075 -- # stubpid=64526 00:25:14.161 13:45:21 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:25:14.161 13:45:21 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:25:14.161 13:45:21 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:25:14.161 13:45:21 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64526 ]] 00:25:14.161 13:45:21 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:25:14.420 [2024-11-20 13:45:21.902347] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:25:14.420 [2024-11-20 13:45:21.902500] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:25:15.359 13:45:22 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:25:15.359 13:45:22 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64526 ]] 00:25:15.359 13:45:22 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:25:15.359 [2024-11-20 13:45:22.925452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:15.359 [2024-11-20 13:45:23.046568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.359 [2024-11-20 13:45:23.046699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.359 [2024-11-20 13:45:23.046768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:15.359 [2024-11-20 13:45:23.066000] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:25:15.359 [2024-11-20 13:45:23.066179] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:25:15.620 [2024-11-20 13:45:23.078960] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:25:15.620 [2024-11-20 13:45:23.079216] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:25:15.620 [2024-11-20 13:45:23.082740] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:25:15.620 [2024-11-20 13:45:23.083204] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:25:15.620 [2024-11-20 13:45:23.083360] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:25:15.620 [2024-11-20 13:45:23.087153] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:25:15.620 [2024-11-20 13:45:23.087503] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:25:15.620 [2024-11-20 13:45:23.087669] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:25:15.620 [2024-11-20 13:45:23.091108] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:25:15.620 [2024-11-20 13:45:23.091414] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:25:15.620 [2024-11-20 13:45:23.091544] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:25:15.620 [2024-11-20 13:45:23.091630] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:25:15.620 [2024-11-20 13:45:23.091699] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:25:16.190 13:45:23 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:25:16.190 13:45:23 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:25:16.190 done. 00:25:16.190 13:45:23 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:25:16.190 13:45:23 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:25:16.190 13:45:23 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:16.190 13:45:23 nvme -- common/autotest_common.sh@10 -- # set +x 00:25:16.190 ************************************ 00:25:16.190 START TEST nvme_reset 00:25:16.190 ************************************ 00:25:16.190 13:45:23 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:25:16.450 Initializing NVMe Controllers 00:25:16.450 Skipping QEMU NVMe SSD at 0000:00:10.0 00:25:16.450 Skipping QEMU NVMe SSD at 0000:00:11.0 00:25:16.450 Skipping QEMU NVMe SSD at 0000:00:13.0 00:25:16.450 Skipping QEMU NVMe SSD at 0000:00:12.0 00:25:16.450 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:25:16.450 00:25:16.450 real 0m0.288s 00:25:16.450 user 0m0.094s 00:25:16.450 sys 0m0.150s 00:25:16.450 13:45:24 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:16.450 13:45:24 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:25:16.450 ************************************ 00:25:16.450 END TEST nvme_reset 00:25:16.450 ************************************ 00:25:16.710 13:45:24 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:25:16.710 13:45:24 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:16.710 13:45:24 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:16.710 13:45:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:25:16.710 ************************************ 00:25:16.710 START TEST nvme_identify 00:25:16.710 ************************************ 00:25:16.710 13:45:24 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:25:16.710 13:45:24 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:25:16.710 13:45:24 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:25:16.710 13:45:24 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:25:16.710 13:45:24 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:25:16.710 13:45:24 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:16.710 13:45:24 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:25:16.710 13:45:24 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:16.710 13:45:24 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:16.710 13:45:24 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:16.710 13:45:24 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:25:16.710 13:45:24 nvme.nvme_identify -- 
common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:25:16.710 13:45:24 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:25:16.972 [2024-11-20 13:45:24.551376] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64559 terminated unexpected 00:25:16.972 ===================================================== 00:25:16.972 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:25:16.972 ===================================================== 00:25:16.972 Controller Capabilities/Features 00:25:16.972 ================================ 00:25:16.972 Vendor ID: 1b36 00:25:16.972 Subsystem Vendor ID: 1af4 00:25:16.972 Serial Number: 12340 00:25:16.972 Model Number: QEMU NVMe Ctrl 00:25:16.972 Firmware Version: 8.0.0 00:25:16.972 Recommended Arb Burst: 6 00:25:16.972 IEEE OUI Identifier: 00 54 52 00:25:16.972 Multi-path I/O 00:25:16.972 May have multiple subsystem ports: No 00:25:16.972 May have multiple controllers: No 00:25:16.972 Associated with SR-IOV VF: No 00:25:16.972 Max Data Transfer Size: 524288 00:25:16.972 Max Number of Namespaces: 256 00:25:16.972 Max Number of I/O Queues: 64 00:25:16.972 NVMe Specification Version (VS): 1.4 00:25:16.972 NVMe Specification Version (Identify): 1.4 00:25:16.972 Maximum Queue Entries: 2048 00:25:16.972 Contiguous Queues Required: Yes 00:25:16.972 Arbitration Mechanisms Supported 00:25:16.972 Weighted Round Robin: Not Supported 00:25:16.972 Vendor Specific: Not Supported 00:25:16.972 Reset Timeout: 7500 ms 00:25:16.972 Doorbell Stride: 4 bytes 00:25:16.972 NVM Subsystem Reset: Not Supported 00:25:16.972 Command Sets Supported 00:25:16.972 NVM Command Set: Supported 00:25:16.972 Boot Partition: Not Supported 00:25:16.972 Memory Page Size Minimum: 4096 bytes 00:25:16.972 Memory Page Size Maximum: 65536 bytes 00:25:16.972 Persistent Memory Region: Not Supported 00:25:16.972 Optional Asynchronous Events Supported 00:25:16.972 Namespace Attribute Notices: Supported 00:25:16.972 Firmware Activation Notices: Not Supported 00:25:16.972 ANA Change Notices: Not Supported 00:25:16.972 PLE Aggregate Log Change Notices: Not Supported 00:25:16.972 LBA Status Info Alert Notices: Not Supported 00:25:16.972 EGE Aggregate Log Change Notices: Not Supported 00:25:16.972 Normal NVM Subsystem Shutdown event: Not Supported 00:25:16.972 Zone Descriptor Change Notices: Not Supported 00:25:16.972 Discovery Log Change Notices: Not Supported 00:25:16.972 Controller Attributes 00:25:16.972 128-bit Host Identifier: Not Supported 00:25:16.972 Non-Operational Permissive Mode: Not Supported 00:25:16.972 NVM Sets: Not Supported 00:25:16.972 Read Recovery Levels: Not Supported 00:25:16.972 Endurance Groups: Not Supported 00:25:16.972 Predictable Latency Mode: Not Supported 00:25:16.972 Traffic Based Keep ALive: Not Supported 00:25:16.972 Namespace Granularity: Not Supported 00:25:16.972 SQ Associations: Not Supported 00:25:16.972 UUID List: Not Supported 00:25:16.972 Multi-Domain Subsystem: Not Supported 00:25:16.972 Fixed Capacity Management: Not Supported 00:25:16.972 Variable Capacity Management: Not Supported 00:25:16.972 Delete Endurance Group: Not Supported 00:25:16.972 Delete NVM Set: Not Supported 00:25:16.972 Extended LBA Formats Supported: Supported 00:25:16.972 Flexible Data Placement Supported: Not Supported 00:25:16.972 00:25:16.972 Controller Memory Buffer Support 00:25:16.972 ================================ 00:25:16.972 Supported: No 
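For reference, the nvme_identify pass traced above reduces to the following minimal standalone sketch. It assumes the checkout path used by this job; the gen_nvme.sh|jq enumeration pipeline and the spdk_nvme_identify invocation both appear verbatim in the xtrace, while the emptiness check and the echo are illustrative additions only:
    #!/usr/bin/env bash
    # Enumerate NVMe PCI addresses the way get_nvme_bdfs does above:
    # gen_nvme.sh prints a JSON bdev config, jq pulls each controller's traddr.
    rootdir=/home/vagrant/spdk_repo/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe controllers in config' >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"    # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
    # Dump controller and namespace data for every controller, as in the report below.
    "$rootdir/build/bin/spdk_nvme_identify" -i 0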
00:25:16.972 00:25:16.972 Persistent Memory Region Support 00:25:16.972 ================================ 00:25:16.972 Supported: No 00:25:16.972 00:25:16.972 Admin Command Set Attributes 00:25:16.972 ============================ 00:25:16.972 Security Send/Receive: Not Supported 00:25:16.972 Format NVM: Supported 00:25:16.972 Firmware Activate/Download: Not Supported 00:25:16.972 Namespace Management: Supported 00:25:16.972 Device Self-Test: Not Supported 00:25:16.972 Directives: Supported 00:25:16.972 NVMe-MI: Not Supported 00:25:16.972 Virtualization Management: Not Supported 00:25:16.972 Doorbell Buffer Config: Supported 00:25:16.972 Get LBA Status Capability: Not Supported 00:25:16.972 Command & Feature Lockdown Capability: Not Supported 00:25:16.972 Abort Command Limit: 4 00:25:16.972 Async Event Request Limit: 4 00:25:16.972 Number of Firmware Slots: N/A 00:25:16.972 Firmware Slot 1 Read-Only: N/A 00:25:16.972 Firmware Activation Without Reset: N/A 00:25:16.972 Multiple Update Detection Support: N/A 00:25:16.972 Firmware Update Granularity: No Information Provided 00:25:16.972 Per-Namespace SMART Log: Yes 00:25:16.972 Asymmetric Namespace Access Log Page: Not Supported 00:25:16.972 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:25:16.972 Command Effects Log Page: Supported 00:25:16.972 Get Log Page Extended Data: Supported 00:25:16.972 Telemetry Log Pages: Not Supported 00:25:16.972 Persistent Event Log Pages: Not Supported 00:25:16.972 Supported Log Pages Log Page: May Support 00:25:16.972 Commands Supported & Effects Log Page: Not Supported 00:25:16.972 Feature Identifiers & Effects Log Page:May Support 00:25:16.972 NVMe-MI Commands & Effects Log Page: May Support 00:25:16.972 Data Area 4 for Telemetry Log: Not Supported 00:25:16.972 Error Log Page Entries Supported: 1 00:25:16.972 Keep Alive: Not Supported 00:25:16.972 00:25:16.972 NVM Command Set Attributes 00:25:16.972 ========================== 00:25:16.972 Submission Queue Entry Size 00:25:16.972 Max: 64 00:25:16.972 Min: 64 00:25:16.972 Completion Queue Entry Size 00:25:16.972 Max: 16 00:25:16.972 Min: 16 00:25:16.972 Number of Namespaces: 256 00:25:16.972 Compare Command: Supported 00:25:16.972 Write Uncorrectable Command: Not Supported 00:25:16.972 Dataset Management Command: Supported 00:25:16.972 Write Zeroes Command: Supported 00:25:16.972 Set Features Save Field: Supported 00:25:16.972 Reservations: Not Supported 00:25:16.972 Timestamp: Supported 00:25:16.972 Copy: Supported 00:25:16.972 Volatile Write Cache: Present 00:25:16.972 Atomic Write Unit (Normal): 1 00:25:16.972 Atomic Write Unit (PFail): 1 00:25:16.972 Atomic Compare & Write Unit: 1 00:25:16.973 Fused Compare & Write: Not Supported 00:25:16.973 Scatter-Gather List 00:25:16.973 SGL Command Set: Supported 00:25:16.973 SGL Keyed: Not Supported 00:25:16.973 SGL Bit Bucket Descriptor: Not Supported 00:25:16.973 SGL Metadata Pointer: Not Supported 00:25:16.973 Oversized SGL: Not Supported 00:25:16.973 SGL Metadata Address: Not Supported 00:25:16.973 SGL Offset: Not Supported 00:25:16.973 Transport SGL Data Block: Not Supported 00:25:16.973 Replay Protected Memory Block: Not Supported 00:25:16.973 00:25:16.973 Firmware Slot Information 00:25:16.973 ========================= 00:25:16.973 Active slot: 1 00:25:16.973 Slot 1 Firmware Revision: 1.0 00:25:16.973 00:25:16.973 00:25:16.973 Commands Supported and Effects 00:25:16.973 ============================== 00:25:16.973 Admin Commands 00:25:16.973 -------------- 00:25:16.973 Delete I/O Submission Queue (00h): Supported 
00:25:16.973 Create I/O Submission Queue (01h): Supported 00:25:16.973 Get Log Page (02h): Supported 00:25:16.973 Delete I/O Completion Queue (04h): Supported 00:25:16.973 Create I/O Completion Queue (05h): Supported 00:25:16.973 Identify (06h): Supported 00:25:16.973 Abort (08h): Supported 00:25:16.973 Set Features (09h): Supported 00:25:16.973 Get Features (0Ah): Supported 00:25:16.973 Asynchronous Event Request (0Ch): Supported 00:25:16.973 Namespace Attachment (15h): Supported NS-Inventory-Change 00:25:16.973 Directive Send (19h): Supported 00:25:16.973 Directive Receive (1Ah): Supported 00:25:16.973 Virtualization Management (1Ch): Supported 00:25:16.973 Doorbell Buffer Config (7Ch): Supported 00:25:16.973 Format NVM (80h): Supported LBA-Change 00:25:16.973 I/O Commands 00:25:16.973 ------------ 00:25:16.973 Flush (00h): Supported LBA-Change 00:25:16.973 Write (01h): Supported LBA-Change 00:25:16.973 Read (02h): Supported 00:25:16.973 Compare (05h): Supported 00:25:16.973 Write Zeroes (08h): Supported LBA-Change 00:25:16.973 Dataset Management (09h): Supported LBA-Change 00:25:16.973 Unknown (0Ch): Supported 00:25:16.973 Unknown (12h): Supported 00:25:16.973 Copy (19h): Supported LBA-Change 00:25:16.973 Unknown (1Dh): Supported LBA-Change 00:25:16.973 00:25:16.973 Error Log 00:25:16.973 ========= 00:25:16.973 00:25:16.973 Arbitration 00:25:16.973 =========== 00:25:16.973 Arbitration Burst: no limit 00:25:16.973 00:25:16.973 Power Management 00:25:16.973 ================ 00:25:16.973 Number of Power States: 1 00:25:16.973 Current Power State: Power State #0 00:25:16.973 Power State #0: 00:25:16.973 Max Power: 25.00 W 00:25:16.973 Non-Operational State: Operational 00:25:16.973 Entry Latency: 16 microseconds 00:25:16.973 Exit Latency: 4 microseconds 00:25:16.973 Relative Read Throughput: 0 00:25:16.973 Relative Read Latency: 0 00:25:16.973 Relative Write Throughput: 0 00:25:16.973 Relative Write Latency: 0 00:25:16.973 Idle Power: Not Reported [2024-11-20 13:45:24.552634] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64559 terminated unexpected 00:25:16.973 Active Power: Not Reported 00:25:16.973 Non-Operational Permissive Mode: Not Supported 00:25:16.973 00:25:16.973 Health Information 00:25:16.973 ================== 00:25:16.973 Critical Warnings: 00:25:16.973 Available Spare Space: OK 00:25:16.973 Temperature: OK 00:25:16.973 Device Reliability: OK 00:25:16.973 Read Only: No 00:25:16.973 Volatile Memory Backup: OK 00:25:16.973 Current Temperature: 323 Kelvin (50 Celsius) 00:25:16.973 Temperature Threshold: 343 Kelvin (70 Celsius) 00:25:16.973 Available Spare: 0% 00:25:16.973 Available Spare Threshold: 0% 00:25:16.973 Life Percentage Used: 0% 00:25:16.973 Data Units Read: 698 00:25:16.973 Data Units Written: 626 00:25:16.973 Host Read Commands: 33728 00:25:16.973 Host Write Commands: 33514 00:25:16.973 Controller Busy Time: 0 minutes 00:25:16.973 Power Cycles: 0 00:25:16.973 Power On Hours: 0 hours 00:25:16.973 Unsafe Shutdowns: 0 00:25:16.973 Unrecoverable Media Errors: 0 00:25:16.973 Lifetime Error Log Entries: 0 00:25:16.973 Warning Temperature Time: 0 minutes 00:25:16.973 Critical Temperature Time: 0 minutes 00:25:16.973 00:25:16.973 Number of Queues 00:25:16.973 ================ 00:25:16.973 Number of I/O Submission Queues: 64 00:25:16.973 Number of I/O Completion Queues: 64 00:25:16.973 00:25:16.973 ZNS Specific Controller Data 00:25:16.973 ============================ 00:25:16.973 Zone Append Size Limit: 0 00:25:16.973 00:25:16.973 00:25:16.973 Active 
Namespaces 00:25:16.973 ================= 00:25:16.973 Namespace ID:1 00:25:16.973 Error Recovery Timeout: Unlimited 00:25:16.973 Command Set Identifier: NVM (00h) 00:25:16.973 Deallocate: Supported 00:25:16.973 Deallocated/Unwritten Error: Supported 00:25:16.973 Deallocated Read Value: All 0x00 00:25:16.973 Deallocate in Write Zeroes: Not Supported 00:25:16.973 Deallocated Guard Field: 0xFFFF 00:25:16.973 Flush: Supported 00:25:16.973 Reservation: Not Supported 00:25:16.973 Metadata Transferred as: Separate Metadata Buffer 00:25:16.973 Namespace Sharing Capabilities: Private 00:25:16.973 Size (in LBAs): 1548666 (5GiB) 00:25:16.973 Capacity (in LBAs): 1548666 (5GiB) 00:25:16.973 Utilization (in LBAs): 1548666 (5GiB) 00:25:16.973 Thin Provisioning: Not Supported 00:25:16.973 Per-NS Atomic Units: No 00:25:16.973 Maximum Single Source Range Length: 128 00:25:16.973 Maximum Copy Length: 128 00:25:16.973 Maximum Source Range Count: 128 00:25:16.973 NGUID/EUI64 Never Reused: No 00:25:16.973 Namespace Write Protected: No 00:25:16.973 Number of LBA Formats: 8 00:25:16.973 Current LBA Format: LBA Format #07 00:25:16.973 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:16.973 LBA Format #01: Data Size: 512 Metadata Size: 8 00:25:16.973 LBA Format #02: Data Size: 512 Metadata Size: 16 00:25:16.973 LBA Format #03: Data Size: 512 Metadata Size: 64 00:25:16.973 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:25:16.973 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:25:16.973 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:25:16.973 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:25:16.973 00:25:16.973 NVM Specific Namespace Data 00:25:16.973 =========================== 00:25:16.973 Logical Block Storage Tag Mask: 0 00:25:16.973 Protection Information Capabilities: 00:25:16.973 16b Guard Protection Information Storage Tag Support: No 00:25:16.973 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:25:16.974 Storage Tag Check Read Support: No 00:25:16.974 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.974 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.974 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.974 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.974 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.974 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.974 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.974 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.974 ===================================================== 00:25:16.974 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:25:16.974 ===================================================== 00:25:16.974 Controller Capabilities/Features 00:25:16.974 ================================ 00:25:16.974 Vendor ID: 1b36 00:25:16.974 Subsystem Vendor ID: 1af4 00:25:16.974 Serial Number: 12341 00:25:16.974 Model Number: QEMU NVMe Ctrl 00:25:16.974 Firmware Version: 8.0.0 00:25:16.974 Recommended Arb Burst: 6 00:25:16.974 IEEE OUI Identifier: 00 54 52 00:25:16.974 Multi-path I/O 00:25:16.974 May have multiple subsystem ports: No 00:25:16.974 May have multiple controllers: No 00:25:16.974 Associated with SR-IOV VF: 
No 00:25:16.974 Max Data Transfer Size: 524288 00:25:16.974 Max Number of Namespaces: 256 00:25:16.974 Max Number of I/O Queues: 64 00:25:16.974 NVMe Specification Version (VS): 1.4 00:25:16.974 NVMe Specification Version (Identify): 1.4 00:25:16.974 Maximum Queue Entries: 2048 00:25:16.974 Contiguous Queues Required: Yes 00:25:16.974 Arbitration Mechanisms Supported 00:25:16.974 Weighted Round Robin: Not Supported 00:25:16.974 Vendor Specific: Not Supported 00:25:16.974 Reset Timeout: 7500 ms 00:25:16.974 Doorbell Stride: 4 bytes 00:25:16.974 NVM Subsystem Reset: Not Supported 00:25:16.974 Command Sets Supported 00:25:16.974 NVM Command Set: Supported 00:25:16.974 Boot Partition: Not Supported 00:25:16.974 Memory Page Size Minimum: 4096 bytes 00:25:16.974 Memory Page Size Maximum: 65536 bytes 00:25:16.974 Persistent Memory Region: Not Supported 00:25:16.974 Optional Asynchronous Events Supported 00:25:16.974 Namespace Attribute Notices: Supported 00:25:16.974 Firmware Activation Notices: Not Supported 00:25:16.974 ANA Change Notices: Not Supported 00:25:16.974 PLE Aggregate Log Change Notices: Not Supported 00:25:16.974 LBA Status Info Alert Notices: Not Supported 00:25:16.974 EGE Aggregate Log Change Notices: Not Supported 00:25:16.974 Normal NVM Subsystem Shutdown event: Not Supported 00:25:16.974 Zone Descriptor Change Notices: Not Supported 00:25:16.974 Discovery Log Change Notices: Not Supported 00:25:16.974 Controller Attributes 00:25:16.974 128-bit Host Identifier: Not Supported 00:25:16.974 Non-Operational Permissive Mode: Not Supported 00:25:16.974 NVM Sets: Not Supported 00:25:16.974 Read Recovery Levels: Not Supported 00:25:16.974 Endurance Groups: Not Supported 00:25:16.974 Predictable Latency Mode: Not Supported 00:25:16.974 Traffic Based Keep ALive: Not Supported 00:25:16.974 Namespace Granularity: Not Supported 00:25:16.974 SQ Associations: Not Supported 00:25:16.974 UUID List: Not Supported 00:25:16.974 Multi-Domain Subsystem: Not Supported 00:25:16.974 Fixed Capacity Management: Not Supported 00:25:16.974 Variable Capacity Management: Not Supported 00:25:16.974 Delete Endurance Group: Not Supported 00:25:16.974 Delete NVM Set: Not Supported 00:25:16.974 Extended LBA Formats Supported: Supported 00:25:16.974 Flexible Data Placement Supported: Not Supported 00:25:16.974 00:25:16.974 Controller Memory Buffer Support 00:25:16.974 ================================ 00:25:16.974 Supported: No 00:25:16.974 00:25:16.974 Persistent Memory Region Support 00:25:16.974 ================================ 00:25:16.974 Supported: No 00:25:16.974 00:25:16.974 Admin Command Set Attributes 00:25:16.974 ============================ 00:25:16.974 Security Send/Receive: Not Supported 00:25:16.974 Format NVM: Supported 00:25:16.974 Firmware Activate/Download: Not Supported 00:25:16.974 Namespace Management: Supported 00:25:16.974 Device Self-Test: Not Supported 00:25:16.974 Directives: Supported 00:25:16.974 NVMe-MI: Not Supported 00:25:16.974 Virtualization Management: Not Supported 00:25:16.974 Doorbell Buffer Config: Supported 00:25:16.974 Get LBA Status Capability: Not Supported 00:25:16.974 Command & Feature Lockdown Capability: Not Supported 00:25:16.974 Abort Command Limit: 4 00:25:16.974 Async Event Request Limit: 4 00:25:16.974 Number of Firmware Slots: N/A 00:25:16.974 Firmware Slot 1 Read-Only: N/A 00:25:16.974 Firmware Activation Without Reset: N/A 00:25:16.974 Multiple Update Detection Support: N/A 00:25:16.974 Firmware Update Granularity: No Information Provided 00:25:16.974 
Per-Namespace SMART Log: Yes 00:25:16.974 Asymmetric Namespace Access Log Page: Not Supported 00:25:16.974 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:25:16.974 Command Effects Log Page: Supported 00:25:16.974 Get Log Page Extended Data: Supported 00:25:16.974 Telemetry Log Pages: Not Supported 00:25:16.974 Persistent Event Log Pages: Not Supported 00:25:16.974 Supported Log Pages Log Page: May Support 00:25:16.974 Commands Supported & Effects Log Page: Not Supported 00:25:16.974 Feature Identifiers & Effects Log Page:May Support 00:25:16.974 NVMe-MI Commands & Effects Log Page: May Support 00:25:16.974 Data Area 4 for Telemetry Log: Not Supported 00:25:16.974 Error Log Page Entries Supported: 1 00:25:16.974 Keep Alive: Not Supported 00:25:16.974 00:25:16.974 NVM Command Set Attributes 00:25:16.974 ========================== 00:25:16.974 Submission Queue Entry Size 00:25:16.974 Max: 64 00:25:16.974 Min: 64 00:25:16.974 Completion Queue Entry Size 00:25:16.974 Max: 16 00:25:16.974 Min: 16 00:25:16.974 Number of Namespaces: 256 00:25:16.974 Compare Command: Supported 00:25:16.974 Write Uncorrectable Command: Not Supported 00:25:16.974 Dataset Management Command: Supported 00:25:16.974 Write Zeroes Command: Supported 00:25:16.974 Set Features Save Field: Supported 00:25:16.974 Reservations: Not Supported 00:25:16.974 Timestamp: Supported 00:25:16.974 Copy: Supported 00:25:16.974 Volatile Write Cache: Present 00:25:16.974 Atomic Write Unit (Normal): 1 00:25:16.974 Atomic Write Unit (PFail): 1 00:25:16.974 Atomic Compare & Write Unit: 1 00:25:16.974 Fused Compare & Write: Not Supported 00:25:16.974 Scatter-Gather List 00:25:16.974 SGL Command Set: Supported 00:25:16.974 SGL Keyed: Not Supported 00:25:16.974 SGL Bit Bucket Descriptor: Not Supported 00:25:16.974 SGL Metadata Pointer: Not Supported 00:25:16.974 Oversized SGL: Not Supported 00:25:16.974 SGL Metadata Address: Not Supported 00:25:16.974 SGL Offset: Not Supported 00:25:16.974 Transport SGL Data Block: Not Supported 00:25:16.974 Replay Protected Memory Block: Not Supported 00:25:16.974 00:25:16.974 Firmware Slot Information 00:25:16.974 ========================= 00:25:16.974 Active slot: 1 00:25:16.974 Slot 1 Firmware Revision: 1.0 00:25:16.974 00:25:16.974 00:25:16.974 Commands Supported and Effects 00:25:16.974 ============================== 00:25:16.974 Admin Commands 00:25:16.974 -------------- 00:25:16.974 Delete I/O Submission Queue (00h): Supported 00:25:16.974 Create I/O Submission Queue (01h): Supported 00:25:16.974 Get Log Page (02h): Supported 00:25:16.974 Delete I/O Completion Queue (04h): Supported 00:25:16.974 Create I/O Completion Queue (05h): Supported 00:25:16.974 Identify (06h): Supported 00:25:16.974 Abort (08h): Supported 00:25:16.974 Set Features (09h): Supported 00:25:16.974 Get Features (0Ah): Supported 00:25:16.974 Asynchronous Event Request (0Ch): Supported 00:25:16.974 Namespace Attachment (15h): Supported NS-Inventory-Change 00:25:16.974 Directive Send (19h): Supported 00:25:16.974 Directive Receive (1Ah): Supported 00:25:16.974 Virtualization Management (1Ch): Supported 00:25:16.974 Doorbell Buffer Config (7Ch): Supported 00:25:16.974 Format NVM (80h): Supported LBA-Change 00:25:16.974 I/O Commands 00:25:16.974 ------------ 00:25:16.974 Flush (00h): Supported LBA-Change 00:25:16.975 Write (01h): Supported LBA-Change 00:25:16.975 Read (02h): Supported 00:25:16.975 Compare (05h): Supported 00:25:16.975 Write Zeroes (08h): Supported LBA-Change 00:25:16.975 Dataset Management (09h): Supported LBA-Change 
00:25:16.975 Unknown (0Ch): Supported 00:25:16.975 Unknown (12h): Supported 00:25:16.975 Copy (19h): Supported LBA-Change 00:25:16.975 Unknown (1Dh): Supported LBA-Change 00:25:16.975 00:25:16.975 Error Log 00:25:16.975 ========= 00:25:16.975 00:25:16.975 Arbitration 00:25:16.975 =========== 00:25:16.975 Arbitration Burst: no limit 00:25:16.975 00:25:16.975 Power Management 00:25:16.975 ================ 00:25:16.975 Number of Power States: 1 00:25:16.975 Current Power State: Power State #0 00:25:16.975 Power State #0: 00:25:16.975 Max Power: 25.00 W 00:25:16.975 Non-Operational State: Operational 00:25:16.975 Entry Latency: 16 microseconds 00:25:16.975 Exit Latency: 4 microseconds 00:25:16.975 Relative Read Throughput: 0 00:25:16.975 Relative Read Latency: 0 00:25:16.975 Relative Write Throughput: 0 00:25:16.975 Relative Write Latency: 0 00:25:16.975 Idle Power: Not Reported 00:25:16.975 Active Power: Not Reported 00:25:16.975 Non-Operational Permissive Mode: Not Supported 00:25:16.975 00:25:16.975 Health Information 00:25:16.975 ================== 00:25:16.975 Critical Warnings: 00:25:16.975 Available Spare Space: OK 00:25:16.975 Temperature: OK 00:25:16.975 [2024-11-20 13:45:24.553233] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64559 terminated unexpected 00:25:16.975 Device Reliability: OK 00:25:16.975 Read Only: No 00:25:16.975 Volatile Memory Backup: OK 00:25:16.975 Current Temperature: 323 Kelvin (50 Celsius) 00:25:16.975 Temperature Threshold: 343 Kelvin (70 Celsius) 00:25:16.975 Available Spare: 0% 00:25:16.975 Available Spare Threshold: 0% 00:25:16.975 Life Percentage Used: 0% 00:25:16.975 Data Units Read: 1036 00:25:16.975 Data Units Written: 903 00:25:16.975 Host Read Commands: 49919 00:25:16.975 Host Write Commands: 48704 00:25:16.975 Controller Busy Time: 0 minutes 00:25:16.975 Power Cycles: 0 00:25:16.975 Power On Hours: 0 hours 00:25:16.975 Unsafe Shutdowns: 0 00:25:16.975 Unrecoverable Media Errors: 0 00:25:16.975 Lifetime Error Log Entries: 0 00:25:16.975 Warning Temperature Time: 0 minutes 00:25:16.975 Critical Temperature Time: 0 minutes 00:25:16.975 00:25:16.975 Number of Queues 00:25:16.975 ================ 00:25:16.975 Number of I/O Submission Queues: 64 00:25:16.975 Number of I/O Completion Queues: 64 00:25:16.975 00:25:16.975 ZNS Specific Controller Data 00:25:16.975 ============================ 00:25:16.975 Zone Append Size Limit: 0 00:25:16.975 00:25:16.975 00:25:16.975 Active Namespaces 00:25:16.975 ================= 00:25:16.975 Namespace ID:1 00:25:16.975 Error Recovery Timeout: Unlimited 00:25:16.975 Command Set Identifier: NVM (00h) 00:25:16.975 Deallocate: Supported 00:25:16.975 Deallocated/Unwritten Error: Supported 00:25:16.975 Deallocated Read Value: All 0x00 00:25:16.975 Deallocate in Write Zeroes: Not Supported 00:25:16.975 Deallocated Guard Field: 0xFFFF 00:25:16.975 Flush: Supported 00:25:16.975 Reservation: Not Supported 00:25:16.975 Namespace Sharing Capabilities: Private 00:25:16.975 Size (in LBAs): 1310720 (5GiB) 00:25:16.975 Capacity (in LBAs): 1310720 (5GiB) 00:25:16.975 Utilization (in LBAs): 1310720 (5GiB) 00:25:16.975 Thin Provisioning: Not Supported 00:25:16.975 Per-NS Atomic Units: No 00:25:16.975 Maximum Single Source Range Length: 128 00:25:16.975 Maximum Copy Length: 128 00:25:16.975 Maximum Source Range Count: 128 00:25:16.975 NGUID/EUI64 Never Reused: No 00:25:16.975 Namespace Write Protected: No 00:25:16.975 Number of LBA Formats: 8 00:25:16.975 Current LBA Format: 
LBA Format #04 00:25:16.975 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:16.975 LBA Format #01: Data Size: 512 Metadata Size: 8 00:25:16.975 LBA Format #02: Data Size: 512 Metadata Size: 16 00:25:16.975 LBA Format #03: Data Size: 512 Metadata Size: 64 00:25:16.975 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:25:16.975 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:25:16.975 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:25:16.975 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:25:16.975 00:25:16.975 NVM Specific Namespace Data 00:25:16.975 =========================== 00:25:16.975 Logical Block Storage Tag Mask: 0 00:25:16.975 Protection Information Capabilities: 00:25:16.975 16b Guard Protection Information Storage Tag Support: No 00:25:16.975 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:25:16.975 Storage Tag Check Read Support: No 00:25:16.975 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.975 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.975 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.975 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.975 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.975 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.975 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.975 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.975 ===================================================== 00:25:16.975 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:25:16.975 ===================================================== 00:25:16.975 Controller Capabilities/Features 00:25:16.975 ================================ 00:25:16.975 Vendor ID: 1b36 00:25:16.975 Subsystem Vendor ID: 1af4 00:25:16.975 Serial Number: 12343 00:25:16.975 Model Number: QEMU NVMe Ctrl 00:25:16.975 Firmware Version: 8.0.0 00:25:16.975 Recommended Arb Burst: 6 00:25:16.975 IEEE OUI Identifier: 00 54 52 00:25:16.975 Multi-path I/O 00:25:16.975 May have multiple subsystem ports: No 00:25:16.975 May have multiple controllers: Yes 00:25:16.975 Associated with SR-IOV VF: No 00:25:16.975 Max Data Transfer Size: 524288 00:25:16.975 Max Number of Namespaces: 256 00:25:16.975 Max Number of I/O Queues: 64 00:25:16.975 NVMe Specification Version (VS): 1.4 00:25:16.975 NVMe Specification Version (Identify): 1.4 00:25:16.975 Maximum Queue Entries: 2048 00:25:16.975 Contiguous Queues Required: Yes 00:25:16.975 Arbitration Mechanisms Supported 00:25:16.975 Weighted Round Robin: Not Supported 00:25:16.975 Vendor Specific: Not Supported 00:25:16.975 Reset Timeout: 7500 ms 00:25:16.975 Doorbell Stride: 4 bytes 00:25:16.975 NVM Subsystem Reset: Not Supported 00:25:16.976 Command Sets Supported 00:25:16.976 NVM Command Set: Supported 00:25:16.976 Boot Partition: Not Supported 00:25:16.976 Memory Page Size Minimum: 4096 bytes 00:25:16.976 Memory Page Size Maximum: 65536 bytes 00:25:16.976 Persistent Memory Region: Not Supported 00:25:16.976 Optional Asynchronous Events Supported 00:25:16.976 Namespace Attribute Notices: Supported 00:25:16.976 Firmware Activation Notices: Not Supported 00:25:16.976 ANA Change Notices: Not Supported 00:25:16.976 PLE Aggregate Log 
Change Notices: Not Supported 00:25:16.976 LBA Status Info Alert Notices: Not Supported 00:25:16.976 EGE Aggregate Log Change Notices: Not Supported 00:25:16.976 Normal NVM Subsystem Shutdown event: Not Supported 00:25:16.976 Zone Descriptor Change Notices: Not Supported 00:25:16.976 Discovery Log Change Notices: Not Supported 00:25:16.976 Controller Attributes 00:25:16.976 128-bit Host Identifier: Not Supported 00:25:16.976 Non-Operational Permissive Mode: Not Supported 00:25:16.976 NVM Sets: Not Supported 00:25:16.976 Read Recovery Levels: Not Supported 00:25:16.976 Endurance Groups: Supported 00:25:16.976 Predictable Latency Mode: Not Supported 00:25:16.976 Traffic Based Keep ALive: Not Supported 00:25:16.976 Namespace Granularity: Not Supported 00:25:16.976 SQ Associations: Not Supported 00:25:16.976 UUID List: Not Supported 00:25:16.976 Multi-Domain Subsystem: Not Supported 00:25:16.976 Fixed Capacity Management: Not Supported 00:25:16.976 Variable Capacity Management: Not Supported 00:25:16.976 Delete Endurance Group: Not Supported 00:25:16.976 Delete NVM Set: Not Supported 00:25:16.976 Extended LBA Formats Supported: Supported 00:25:16.976 Flexible Data Placement Supported: Supported 00:25:16.976 00:25:16.976 Controller Memory Buffer Support 00:25:16.976 ================================ 00:25:16.976 Supported: No 00:25:16.976 00:25:16.976 Persistent Memory Region Support 00:25:16.976 ================================ 00:25:16.976 Supported: No 00:25:16.976 00:25:16.976 Admin Command Set Attributes 00:25:16.976 ============================ 00:25:16.976 Security Send/Receive: Not Supported 00:25:16.976 Format NVM: Supported 00:25:16.976 Firmware Activate/Download: Not Supported 00:25:16.976 Namespace Management: Supported 00:25:16.976 Device Self-Test: Not Supported 00:25:16.976 Directives: Supported 00:25:16.976 NVMe-MI: Not Supported 00:25:16.976 Virtualization Management: Not Supported 00:25:16.976 Doorbell Buffer Config: Supported 00:25:16.976 Get LBA Status Capability: Not Supported 00:25:16.976 Command & Feature Lockdown Capability: Not Supported 00:25:16.976 Abort Command Limit: 4 00:25:16.976 Async Event Request Limit: 4 00:25:16.976 Number of Firmware Slots: N/A 00:25:16.976 Firmware Slot 1 Read-Only: N/A 00:25:16.976 Firmware Activation Without Reset: N/A 00:25:16.976 Multiple Update Detection Support: N/A 00:25:16.976 Firmware Update Granularity: No Information Provided 00:25:16.976 Per-Namespace SMART Log: Yes 00:25:16.976 Asymmetric Namespace Access Log Page: Not Supported 00:25:16.976 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:25:16.976 Command Effects Log Page: Supported 00:25:16.976 Get Log Page Extended Data: Supported 00:25:16.976 Telemetry Log Pages: Not Supported 00:25:16.976 Persistent Event Log Pages: Not Supported 00:25:16.976 Supported Log Pages Log Page: May Support 00:25:16.976 Commands Supported & Effects Log Page: Not Supported 00:25:16.976 Feature Identifiers & Effects Log Page:May Support 00:25:16.976 NVMe-MI Commands & Effects Log Page: May Support 00:25:16.976 Data Area 4 for Telemetry Log: Not Supported 00:25:16.976 Error Log Page Entries Supported: 1 00:25:16.976 Keep Alive: Not Supported 00:25:16.976 00:25:16.976 NVM Command Set Attributes 00:25:16.976 ========================== 00:25:16.976 Submission Queue Entry Size 00:25:16.976 Max: 64 00:25:16.976 Min: 64 00:25:16.976 Completion Queue Entry Size 00:25:16.976 Max: 16 00:25:16.976 Min: 16 00:25:16.976 Number of Namespaces: 256 00:25:16.976 Compare Command: Supported 00:25:16.976 Write 
Uncorrectable Command: Not Supported 00:25:16.976 Dataset Management Command: Supported 00:25:16.976 Write Zeroes Command: Supported 00:25:16.976 Set Features Save Field: Supported 00:25:16.976 Reservations: Not Supported 00:25:16.976 Timestamp: Supported 00:25:16.976 Copy: Supported 00:25:16.976 Volatile Write Cache: Present 00:25:16.976 Atomic Write Unit (Normal): 1 00:25:16.976 Atomic Write Unit (PFail): 1 00:25:16.976 Atomic Compare & Write Unit: 1 00:25:16.976 Fused Compare & Write: Not Supported 00:25:16.976 Scatter-Gather List 00:25:16.976 SGL Command Set: Supported 00:25:16.976 SGL Keyed: Not Supported 00:25:16.976 SGL Bit Bucket Descriptor: Not Supported 00:25:16.976 SGL Metadata Pointer: Not Supported 00:25:16.976 Oversized SGL: Not Supported 00:25:16.976 SGL Metadata Address: Not Supported 00:25:16.976 SGL Offset: Not Supported 00:25:16.976 Transport SGL Data Block: Not Supported 00:25:16.976 Replay Protected Memory Block: Not Supported 00:25:16.976 00:25:16.976 Firmware Slot Information 00:25:16.976 ========================= 00:25:16.976 Active slot: 1 00:25:16.976 Slot 1 Firmware Revision: 1.0 00:25:16.976 00:25:16.976 00:25:16.976 Commands Supported and Effects 00:25:16.976 ============================== 00:25:16.976 Admin Commands 00:25:16.976 -------------- 00:25:16.976 Delete I/O Submission Queue (00h): Supported 00:25:16.976 Create I/O Submission Queue (01h): Supported 00:25:16.976 Get Log Page (02h): Supported 00:25:16.976 Delete I/O Completion Queue (04h): Supported 00:25:16.976 Create I/O Completion Queue (05h): Supported 00:25:16.976 Identify (06h): Supported 00:25:16.976 Abort (08h): Supported 00:25:16.976 Set Features (09h): Supported 00:25:16.976 Get Features (0Ah): Supported 00:25:16.976 Asynchronous Event Request (0Ch): Supported 00:25:16.976 Namespace Attachment (15h): Supported NS-Inventory-Change 00:25:16.976 Directive Send (19h): Supported 00:25:16.976 Directive Receive (1Ah): Supported 00:25:16.976 Virtualization Management (1Ch): Supported 00:25:16.977 Doorbell Buffer Config (7Ch): Supported 00:25:16.977 Format NVM (80h): Supported LBA-Change 00:25:16.977 I/O Commands 00:25:16.977 ------------ 00:25:16.977 Flush (00h): Supported LBA-Change 00:25:16.977 Write (01h): Supported LBA-Change 00:25:16.977 Read (02h): Supported 00:25:16.977 Compare (05h): Supported 00:25:16.977 Write Zeroes (08h): Supported LBA-Change 00:25:16.977 Dataset Management (09h): Supported LBA-Change 00:25:16.977 Unknown (0Ch): Supported 00:25:16.977 Unknown (12h): Supported 00:25:16.977 Copy (19h): Supported LBA-Change 00:25:16.977 Unknown (1Dh): Supported LBA-Change 00:25:16.977 00:25:16.977 Error Log 00:25:16.977 ========= 00:25:16.977 00:25:16.977 Arbitration 00:25:16.977 =========== 00:25:16.977 Arbitration Burst: no limit 00:25:16.977 00:25:16.977 Power Management 00:25:16.977 ================ 00:25:16.977 Number of Power States: 1 00:25:16.977 Current Power State: Power State #0 00:25:16.977 Power State #0: 00:25:16.977 Max Power: 25.00 W 00:25:16.977 Non-Operational State: Operational 00:25:16.977 Entry Latency: 16 microseconds 00:25:16.977 Exit Latency: 4 microseconds 00:25:16.977 Relative Read Throughput: 0 00:25:16.977 Relative Read Latency: 0 00:25:16.977 Relative Write Throughput: 0 00:25:16.977 Relative Write Latency: 0 00:25:16.977 Idle Power: Not Reported 00:25:16.977 Active Power: Not Reported 00:25:16.977 Non-Operational Permissive Mode: Not Supported 00:25:16.977 00:25:16.977 Health Information 00:25:16.977 ================== 00:25:16.977 Critical Warnings: 00:25:16.977 
Available Spare Space: OK 00:25:16.977 Temperature: OK 00:25:16.977 Device Reliability: OK 00:25:16.977 Read Only: No 00:25:16.977 Volatile Memory Backup: OK 00:25:16.977 Current Temperature: 323 Kelvin (50 Celsius) 00:25:16.977 Temperature Threshold: 343 Kelvin (70 Celsius) 00:25:16.977 Available Spare: 0% 00:25:16.977 Available Spare Threshold: 0% 00:25:16.977 Life Percentage Used: 0% 00:25:16.977 Data Units Read: 826 00:25:16.977 Data Units Written: 755 00:25:16.977 Host Read Commands: 34967 00:25:16.977 Host Write Commands: 34390 00:25:16.977 Controller Busy Time: 0 minutes 00:25:16.977 Power Cycles: 0 00:25:16.977 Power On Hours: 0 hours 00:25:16.977 Unsafe Shutdowns: 0 00:25:16.977 Unrecoverable Media Errors: 0 00:25:16.977 Lifetime Error Log Entries: 0 00:25:16.977 Warning Temperature Time: 0 minutes 00:25:16.977 Critical Temperature Time: 0 minutes 00:25:16.977 00:25:16.977 Number of Queues 00:25:16.977 ================ 00:25:16.977 Number of I/O Submission Queues: 64 00:25:16.977 Number of I/O Completion Queues: 64 00:25:16.977 00:25:16.977 ZNS Specific Controller Data 00:25:16.977 ============================ 00:25:16.977 Zone Append Size Limit: 0 00:25:16.977 00:25:16.977 00:25:16.977 Active Namespaces 00:25:16.977 ================= 00:25:16.977 Namespace ID:1 00:25:16.977 Error Recovery Timeout: Unlimited 00:25:16.977 Command Set Identifier: NVM (00h) 00:25:16.977 Deallocate: Supported 00:25:16.977 Deallocated/Unwritten Error: Supported 00:25:16.977 Deallocated Read Value: All 0x00 00:25:16.977 Deallocate in Write Zeroes: Not Supported 00:25:16.977 Deallocated Guard Field: 0xFFFF 00:25:16.977 Flush: Supported 00:25:16.977 Reservation: Not Supported 00:25:16.977 Namespace Sharing Capabilities: Multiple Controllers 00:25:16.977 Size (in LBAs): 262144 (1GiB) 00:25:16.977 Capacity (in LBAs): 262144 (1GiB) 00:25:16.977 Utilization (in LBAs): 262144 (1GiB) 00:25:16.977 Thin Provisioning: Not Supported 00:25:16.977 Per-NS Atomic Units: No 00:25:16.977 Maximum Single Source Range Length: 128 00:25:16.977 Maximum Copy Length: 128 00:25:16.977 Maximum Source Range Count: 128 00:25:16.977 NGUID/EUI64 Never Reused: No 00:25:16.977 Namespace Write Protected: No 00:25:16.977 Endurance group ID: 1 00:25:16.977 Number of LBA Formats: 8 00:25:16.977 Current LBA Format: LBA Format #04 00:25:16.977 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:16.977 LBA Format #01: Data Size: 512 Metadata Size: 8 00:25:16.977 LBA Format #02: Data Size: 512 Metadata Size: 16 00:25:16.977 LBA Format #03: Data Size: 512 Metadata Size: 64 00:25:16.977 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:25:16.977 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:25:16.977 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:25:16.977 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:25:16.977 00:25:16.977 Get Feature FDP: 00:25:16.977 ================ 00:25:16.977 Enabled: Yes 00:25:16.977 FDP configuration index: 0 00:25:16.977 00:25:16.977 FDP configurations log page 00:25:16.977 =========================== 00:25:16.977 Number of FDP configurations: 1 00:25:16.977 Version: 0 00:25:16.977 Size: 112 00:25:16.977 FDP Configuration Descriptor: 0 00:25:16.977 Descriptor Size: 96 00:25:16.977 Reclaim Group Identifier format: 2 00:25:16.977 FDP Volatile Write Cache: Not Present 00:25:16.977 FDP Configuration: Valid 00:25:16.977 Vendor Specific Size: 0 00:25:16.977 Number of Reclaim Groups: 2 00:25:16.977 Number of Reclaim Unit Handles: 8 00:25:16.977 Max Placement Identifiers: 128 00:25:16.977 Number of 
Namespaces Supported: 256 00:25:16.977 Reclaim unit Nominal Size: 6000000 bytes 00:25:16.977 Estimated Reclaim Unit Time Limit: Not Reported 00:25:16.977 RUH Desc #000: RUH Type: Initially Isolated 00:25:16.977 RUH Desc #001: RUH Type: Initially Isolated 00:25:16.977 RUH Desc #002: RUH Type: Initially Isolated 00:25:16.977 RUH Desc #003: RUH Type: Initially Isolated 00:25:16.977 RUH Desc #004: RUH Type: Initially Isolated 00:25:16.977 RUH Desc #005: RUH Type: Initially Isolated 00:25:16.977 RUH Desc #006: RUH Type: Initially Isolated 00:25:16.978 RUH Desc #007: RUH Type: Initially Isolated 00:25:16.978 00:25:16.978 FDP reclaim unit handle usage log page 00:25:16.978 ====================================== 00:25:16.978 Number of Reclaim Unit Handles: 8 00:25:16.978 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:25:16.978 RUH Usage Desc #001: RUH Attributes: Unused 00:25:16.978 RUH Usage Desc #002: RUH Attributes: Unused 00:25:16.978 RUH Usage Desc #003: RUH Attributes: Unused 00:25:16.978 RUH Usage Desc #004: RUH Attributes: Unused 00:25:16.978 RUH Usage Desc #005: RUH Attributes: Unused 00:25:16.978 RUH Usage Desc #006: RUH Attributes: Unused 00:25:16.978 RUH Usage Desc #007: RUH Attributes: Unused 00:25:16.978 00:25:16.978 FDP statistics log page 00:25:16.978 ======================= 00:25:16.978 Host bytes with metadata written: 477863936 00:25:16.978 Media bytes with metadata written: 477908992 00:25:16.978 [2024-11-20 13:45:24.554256] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64559 terminated unexpected 00:25:16.978 Media bytes erased: 0 00:25:16.978 00:25:16.978 FDP events log page 00:25:16.978 =================== 00:25:16.978 Number of FDP events: 0 00:25:16.978 00:25:16.978 NVM Specific Namespace Data 00:25:16.978 =========================== 00:25:16.978 Logical Block Storage Tag Mask: 0 00:25:16.978 Protection Information Capabilities: 00:25:16.978 16b Guard Protection Information Storage Tag Support: No 00:25:16.978 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:25:16.978 Storage Tag Check Read Support: No 00:25:16.978 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.978 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.978 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.978 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.978 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.978 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.978 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.978 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.978 ===================================================== 00:25:16.978 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:25:16.978 ===================================================== 00:25:16.978 Controller Capabilities/Features 00:25:16.978 ================================ 00:25:16.978 Vendor ID: 1b36 00:25:16.978 Subsystem Vendor ID: 1af4 00:25:16.978 Serial Number: 12342 00:25:16.978 Model Number: QEMU NVMe Ctrl 00:25:16.978 Firmware Version: 8.0.0 00:25:16.978 Recommended Arb Burst: 6 00:25:16.978 IEEE OUI Identifier: 00 54 52 00:25:16.978 Multi-path I/O 
00:25:16.978 May have multiple subsystem ports: No 00:25:16.978 May have multiple controllers: No 00:25:16.978 Associated with SR-IOV VF: No 00:25:16.978 Max Data Transfer Size: 524288 00:25:16.978 Max Number of Namespaces: 256 00:25:16.978 Max Number of I/O Queues: 64 00:25:16.978 NVMe Specification Version (VS): 1.4 00:25:16.978 NVMe Specification Version (Identify): 1.4 00:25:16.978 Maximum Queue Entries: 2048 00:25:16.978 Contiguous Queues Required: Yes 00:25:16.978 Arbitration Mechanisms Supported 00:25:16.978 Weighted Round Robin: Not Supported 00:25:16.978 Vendor Specific: Not Supported 00:25:16.978 Reset Timeout: 7500 ms 00:25:16.978 Doorbell Stride: 4 bytes 00:25:16.978 NVM Subsystem Reset: Not Supported 00:25:16.978 Command Sets Supported 00:25:16.978 NVM Command Set: Supported 00:25:16.978 Boot Partition: Not Supported 00:25:16.978 Memory Page Size Minimum: 4096 bytes 00:25:16.978 Memory Page Size Maximum: 65536 bytes 00:25:16.978 Persistent Memory Region: Not Supported 00:25:16.978 Optional Asynchronous Events Supported 00:25:16.978 Namespace Attribute Notices: Supported 00:25:16.978 Firmware Activation Notices: Not Supported 00:25:16.978 ANA Change Notices: Not Supported 00:25:16.978 PLE Aggregate Log Change Notices: Not Supported 00:25:16.978 LBA Status Info Alert Notices: Not Supported 00:25:16.978 EGE Aggregate Log Change Notices: Not Supported 00:25:16.978 Normal NVM Subsystem Shutdown event: Not Supported 00:25:16.978 Zone Descriptor Change Notices: Not Supported 00:25:16.978 Discovery Log Change Notices: Not Supported 00:25:16.978 Controller Attributes 00:25:16.978 128-bit Host Identifier: Not Supported 00:25:16.978 Non-Operational Permissive Mode: Not Supported 00:25:16.978 NVM Sets: Not Supported 00:25:16.978 Read Recovery Levels: Not Supported 00:25:16.978 Endurance Groups: Not Supported 00:25:16.978 Predictable Latency Mode: Not Supported 00:25:16.978 Traffic Based Keep ALive: Not Supported 00:25:16.978 Namespace Granularity: Not Supported 00:25:16.978 SQ Associations: Not Supported 00:25:16.978 UUID List: Not Supported 00:25:16.978 Multi-Domain Subsystem: Not Supported 00:25:16.978 Fixed Capacity Management: Not Supported 00:25:16.978 Variable Capacity Management: Not Supported 00:25:16.978 Delete Endurance Group: Not Supported 00:25:16.978 Delete NVM Set: Not Supported 00:25:16.978 Extended LBA Formats Supported: Supported 00:25:16.978 Flexible Data Placement Supported: Not Supported 00:25:16.978 00:25:16.978 Controller Memory Buffer Support 00:25:16.978 ================================ 00:25:16.978 Supported: No 00:25:16.978 00:25:16.978 Persistent Memory Region Support 00:25:16.978 ================================ 00:25:16.978 Supported: No 00:25:16.978 00:25:16.978 Admin Command Set Attributes 00:25:16.978 ============================ 00:25:16.978 Security Send/Receive: Not Supported 00:25:16.978 Format NVM: Supported 00:25:16.978 Firmware Activate/Download: Not Supported 00:25:16.978 Namespace Management: Supported 00:25:16.978 Device Self-Test: Not Supported 00:25:16.978 Directives: Supported 00:25:16.978 NVMe-MI: Not Supported 00:25:16.978 Virtualization Management: Not Supported 00:25:16.978 Doorbell Buffer Config: Supported 00:25:16.978 Get LBA Status Capability: Not Supported 00:25:16.978 Command & Feature Lockdown Capability: Not Supported 00:25:16.978 Abort Command Limit: 4 00:25:16.978 Async Event Request Limit: 4 00:25:16.978 Number of Firmware Slots: N/A 00:25:16.978 Firmware Slot 1 Read-Only: N/A 00:25:16.978 Firmware Activation Without Reset: N/A 
00:25:16.978 Admin Command Set Attributes 00:25:16.978 ============================ 00:25:16.978 Security Send/Receive: Not Supported 00:25:16.978 Format NVM: Supported 00:25:16.978 Firmware Activate/Download: Not Supported 00:25:16.978 Namespace Management: Supported 00:25:16.978 Device Self-Test: Not Supported 00:25:16.978 Directives: Supported 00:25:16.978 NVMe-MI: Not Supported 00:25:16.978 Virtualization Management: Not Supported 00:25:16.978 Doorbell Buffer Config: Supported 00:25:16.978 Get LBA Status Capability: Not Supported 00:25:16.978 Command & Feature Lockdown Capability: Not Supported 00:25:16.978 Abort Command Limit: 4 00:25:16.978 Async Event Request Limit: 4 00:25:16.978 Number of Firmware Slots: N/A 00:25:16.978 Firmware Slot 1 Read-Only: N/A 00:25:16.978 Firmware Activation Without Reset: N/A 00:25:16.978 Multiple Update Detection Support: N/A 00:25:16.978 Firmware Update Granularity: No Information Provided 00:25:16.978 Per-Namespace SMART Log: Yes 00:25:16.978 Asymmetric Namespace Access Log Page: Not Supported 00:25:16.978 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:25:16.978 Command Effects Log Page: Supported 00:25:16.978 Get Log Page Extended Data: Supported 00:25:16.978 Telemetry Log Pages: Not Supported 00:25:16.978 Persistent Event Log Pages: Not Supported 00:25:16.978 Supported Log Pages Log Page: May Support 00:25:16.978 Commands Supported & Effects Log Page: Not Supported 00:25:16.978 Feature Identifiers & Effects Log Page: May Support 00:25:16.978 NVMe-MI Commands & Effects Log Page: May Support 00:25:16.978 Data Area 4 for Telemetry Log: Not Supported 00:25:16.978 Error Log Page Entries Supported: 1 00:25:16.978 Keep Alive: Not Supported 00:25:16.978 00:25:16.978 NVM Command Set Attributes 00:25:16.978 ========================== 00:25:16.978 Submission Queue Entry Size 00:25:16.978 Max: 64 00:25:16.978 Min: 64 00:25:16.978 Completion Queue Entry Size 00:25:16.978 Max: 16 00:25:16.978 Min: 16 00:25:16.978 Number of Namespaces: 256 00:25:16.978 Compare Command: Supported 00:25:16.978 Write Uncorrectable Command: Not Supported 00:25:16.978 Dataset Management Command: Supported 00:25:16.978 Write Zeroes Command: Supported 00:25:16.978 Set Features Save Field: Supported 00:25:16.978 Reservations: Not Supported 00:25:16.978 Timestamp: Supported 00:25:16.978 Copy: Supported 00:25:16.978 Volatile Write Cache: Present 00:25:16.978 Atomic Write Unit (Normal): 1 00:25:16.978 Atomic Write Unit (PFail): 1 00:25:16.978 Atomic Compare & Write Unit: 1 00:25:16.978 Fused Compare & Write: Not Supported 00:25:16.978 Scatter-Gather List 00:25:16.978 SGL Command Set: Supported 00:25:16.978 SGL Keyed: Not Supported 00:25:16.978 SGL Bit Bucket Descriptor: Not Supported 00:25:16.978 SGL Metadata Pointer: Not Supported 00:25:16.978 Oversized SGL: Not Supported 00:25:16.979 SGL Metadata Address: Not Supported 00:25:16.979 SGL Offset: Not Supported 00:25:16.979 Transport SGL Data Block: Not Supported 00:25:16.979 Replay Protected Memory Block: Not Supported 00:25:16.979 00:25:16.979 Firmware Slot Information 00:25:16.979 ========================= 00:25:16.979 Active slot: 1 00:25:16.979 Slot 1 Firmware Revision: 1.0 00:25:16.979 00:25:16.979 00:25:16.979 Commands Supported and Effects 00:25:16.979 ============================== 00:25:16.979 Admin Commands 00:25:16.979 -------------- 00:25:16.979 Delete I/O Submission Queue (00h): Supported 00:25:16.979 Create I/O Submission Queue (01h): Supported 00:25:16.979 Get Log Page (02h): Supported 00:25:16.979 Delete I/O Completion Queue (04h): Supported 00:25:16.979 Create I/O Completion Queue (05h): Supported 00:25:16.979 Identify (06h): Supported 00:25:16.979 Abort (08h): Supported 00:25:16.979 Set Features (09h): Supported 00:25:16.979 Get Features (0Ah): Supported 00:25:16.979 Asynchronous Event Request (0Ch): Supported 00:25:16.979 Namespace Attachment (15h): Supported NS-Inventory-Change 00:25:16.979 Directive Send (19h): Supported 00:25:16.979 Directive Receive (1Ah): Supported 00:25:16.979 Virtualization Management (1Ch): Supported 00:25:16.979 Doorbell Buffer Config (7Ch): Supported 00:25:16.979 Format NVM (80h): Supported LBA-Change 00:25:16.979 I/O Commands 00:25:16.979 ------------ 00:25:16.979 Flush (00h): Supported LBA-Change 00:25:16.979 Write (01h): Supported LBA-Change 00:25:16.979 Read (02h): Supported 00:25:16.979 Compare (05h):
Supported 00:25:16.979 Write Zeroes (08h): Supported LBA-Change 00:25:16.979 Dataset Management (09h): Supported LBA-Change 00:25:16.979 Unknown (0Ch): Supported 00:25:16.979 Unknown (12h): Supported 00:25:16.979 Copy (19h): Supported LBA-Change 00:25:16.979 Unknown (1Dh): Supported LBA-Change 00:25:16.979 00:25:16.979 Error Log 00:25:16.979 ========= 00:25:16.979 00:25:16.979 Arbitration 00:25:16.979 =========== 00:25:16.979 Arbitration Burst: no limit 00:25:16.979 00:25:16.979 Power Management 00:25:16.979 ================ 00:25:16.979 Number of Power States: 1 00:25:16.979 Current Power State: Power State #0 00:25:16.979 Power State #0: 00:25:16.979 Max Power: 25.00 W 00:25:16.979 Non-Operational State: Operational 00:25:16.979 Entry Latency: 16 microseconds 00:25:16.979 Exit Latency: 4 microseconds 00:25:16.979 Relative Read Throughput: 0 00:25:16.979 Relative Read Latency: 0 00:25:16.979 Relative Write Throughput: 0 00:25:16.979 Relative Write Latency: 0 00:25:16.979 Idle Power: Not Reported 00:25:16.979 Active Power: Not Reported 00:25:16.979 Non-Operational Permissive Mode: Not Supported 00:25:16.979 00:25:16.979 Health Information 00:25:16.979 ================== 00:25:16.979 Critical Warnings: 00:25:16.979 Available Spare Space: OK 00:25:16.979 Temperature: OK 00:25:16.979 Device Reliability: OK 00:25:16.979 Read Only: No 00:25:16.979 Volatile Memory Backup: OK 00:25:16.979 Current Temperature: 323 Kelvin (50 Celsius) 00:25:16.979 Temperature Threshold: 343 Kelvin (70 Celsius) 00:25:16.979 Available Spare: 0% 00:25:16.979 Available Spare Threshold: 0% 00:25:16.979 Life Percentage Used: 0% 00:25:16.979 Data Units Read: 2219 00:25:16.979 Data Units Written: 2006 00:25:16.979 Host Read Commands: 102777 00:25:16.979 Host Write Commands: 101046 00:25:16.979 Controller Busy Time: 0 minutes 00:25:16.979 Power Cycles: 0 00:25:16.979 Power On Hours: 0 hours 00:25:16.979 Unsafe Shutdowns: 0 00:25:16.979 Unrecoverable Media Errors: 0 00:25:16.979 Lifetime Error Log Entries: 0 00:25:16.979 Warning Temperature Time: 0 minutes 00:25:16.979 Critical Temperature Time: 0 minutes 00:25:16.979 00:25:16.979 Number of Queues 00:25:16.979 ================ 00:25:16.979 Number of I/O Submission Queues: 64 00:25:16.979 Number of I/O Completion Queues: 64 00:25:16.979 00:25:16.979 ZNS Specific Controller Data 00:25:16.979 ============================ 00:25:16.979 Zone Append Size Limit: 0 00:25:16.979 00:25:16.979 00:25:16.979 Active Namespaces 00:25:16.979 ================= 00:25:16.979 Namespace ID:1 00:25:16.979 Error Recovery Timeout: Unlimited 00:25:16.979 Command Set Identifier: NVM (00h) 00:25:16.979 Deallocate: Supported 00:25:16.979 Deallocated/Unwritten Error: Supported 00:25:16.979 Deallocated Read Value: All 0x00 00:25:16.979 Deallocate in Write Zeroes: Not Supported 00:25:16.979 Deallocated Guard Field: 0xFFFF 00:25:16.979 Flush: Supported 00:25:16.979 Reservation: Not Supported 00:25:16.979 Namespace Sharing Capabilities: Private 00:25:16.979 Size (in LBAs): 1048576 (4GiB) 00:25:16.979 Capacity (in LBAs): 1048576 (4GiB) 00:25:16.979 Utilization (in LBAs): 1048576 (4GiB) 00:25:16.979 Thin Provisioning: Not Supported 00:25:16.979 Per-NS Atomic Units: No 00:25:16.979 Maximum Single Source Range Length: 128 00:25:16.979 Maximum Copy Length: 128 00:25:16.979 Maximum Source Range Count: 128 00:25:16.979 NGUID/EUI64 Never Reused: No 00:25:16.979 Namespace Write Protected: No 00:25:16.979 Number of LBA Formats: 8 00:25:16.979 Current LBA Format: LBA Format #04 00:25:16.979 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:25:16.979 LBA Format #01: Data Size: 512 Metadata Size: 8 00:25:16.979 LBA Format #02: Data Size: 512 Metadata Size: 16 00:25:16.979 LBA Format #03: Data Size: 512 Metadata Size: 64 00:25:16.979 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:25:16.979 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:25:16.979 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:25:16.979 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:25:16.979 00:25:16.979 NVM Specific Namespace Data 00:25:16.979 =========================== 00:25:16.979 Logical Block Storage Tag Mask: 0 00:25:16.979 Protection Information Capabilities: 00:25:16.979 16b Guard Protection Information Storage Tag Support: No 00:25:16.979 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:25:16.979 Storage Tag Check Read Support: No 00:25:16.979 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.979 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.979 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.979 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.979 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.979 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.979 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.979 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.979 Namespace ID:2 00:25:16.979 Error Recovery Timeout: Unlimited 00:25:16.979 Command Set Identifier: NVM (00h) 00:25:16.979 Deallocate: Supported 00:25:16.979 Deallocated/Unwritten Error: Supported 00:25:16.979 Deallocated Read Value: All 0x00 00:25:16.979 Deallocate in Write Zeroes: Not Supported 00:25:16.979 Deallocated Guard Field: 0xFFFF 00:25:16.979 Flush: Supported 00:25:16.979 Reservation: Not Supported 00:25:16.979 Namespace Sharing Capabilities: Private 00:25:16.979 Size (in LBAs): 1048576 (4GiB) 00:25:16.979 Capacity (in LBAs): 1048576 (4GiB) 00:25:16.979 Utilization (in LBAs): 1048576 (4GiB) 00:25:16.979 Thin Provisioning: Not Supported 00:25:16.979 Per-NS Atomic Units: No 00:25:16.979 Maximum Single Source Range Length: 128 00:25:16.979 Maximum Copy Length: 128 00:25:16.979 Maximum Source Range Count: 128 00:25:16.979 NGUID/EUI64 Never Reused: No 00:25:16.979 Namespace Write Protected: No 00:25:16.979 Number of LBA Formats: 8 00:25:16.979 Current LBA Format: LBA Format #04 00:25:16.979 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:16.979 LBA Format #01: Data Size: 512 Metadata Size: 8 00:25:16.979 LBA Format #02: Data Size: 512 Metadata Size: 16 00:25:16.979 LBA Format #03: Data Size: 512 Metadata Size: 64 00:25:16.979 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:25:16.979 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:25:16.979 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:25:16.979 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:25:16.979 00:25:16.979 NVM Specific Namespace Data 00:25:16.979 =========================== 00:25:16.979 Logical Block Storage Tag Mask: 0 00:25:16.979 Protection Information Capabilities: 00:25:16.979 16b Guard Protection Information Storage Tag Support: No 00:25:16.979 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:25:16.979 Storage Tag Check Read Support: No 00:25:16.979 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.979 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.979 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.979 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.979 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.979 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.979 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.979 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.979 Namespace ID:3 00:25:16.979 Error Recovery Timeout: Unlimited 00:25:16.979 Command Set Identifier: NVM (00h) 00:25:16.979 Deallocate: Supported 00:25:16.979 Deallocated/Unwritten Error: Supported 00:25:16.979 Deallocated Read Value: All 0x00 00:25:16.979 Deallocate in Write Zeroes: Not Supported 00:25:16.979 Deallocated Guard Field: 0xFFFF 00:25:16.979 Flush: Supported 00:25:16.979 Reservation: Not Supported 00:25:16.979 Namespace Sharing Capabilities: Private 00:25:16.979 Size (in LBAs): 1048576 (4GiB) 00:25:16.979 Capacity (in LBAs): 1048576 (4GiB) 00:25:16.979 Utilization (in LBAs): 1048576 (4GiB) 00:25:16.979 Thin Provisioning: Not Supported 00:25:16.979 Per-NS Atomic Units: No 00:25:16.979 Maximum Single Source Range Length: 128 00:25:16.979 Maximum Copy Length: 128 00:25:16.979 Maximum Source Range Count: 128 00:25:16.980 NGUID/EUI64 Never Reused: No 00:25:16.980 Namespace Write Protected: No 00:25:16.980 Number of LBA Formats: 8 00:25:16.980 Current LBA Format: LBA Format #04 00:25:16.980 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:16.980 LBA Format #01: Data Size: 512 Metadata Size: 8 00:25:16.980 LBA Format #02: Data Size: 512 Metadata Size: 16 00:25:16.980 LBA Format #03: Data Size: 512 Metadata Size: 64 00:25:16.980 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:25:16.980 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:25:16.980 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:25:16.980 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:25:16.980 00:25:16.980 NVM Specific Namespace Data 00:25:16.980 =========================== 00:25:16.980 Logical Block Storage Tag Mask: 0 00:25:16.980 Protection Information Capabilities: 00:25:16.980 16b Guard Protection Information Storage Tag Support: No 00:25:16.980 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:25:16.980 Storage Tag Check Read Support: No 00:25:16.980 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.980 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.980 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.980 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.980 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.980 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.980 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.980 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:16.980 13:45:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:25:16.980 13:45:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:25:17.240 ===================================================== 00:25:17.240 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:25:17.240 ===================================================== 00:25:17.240 Controller Capabilities/Features 00:25:17.240 ================================ 00:25:17.240 Vendor ID: 1b36 00:25:17.240 Subsystem Vendor ID: 1af4 00:25:17.240 Serial Number: 12340 00:25:17.240 Model Number: QEMU NVMe Ctrl 00:25:17.240 Firmware Version: 8.0.0 00:25:17.240 Recommended Arb Burst: 6 00:25:17.240 IEEE OUI Identifier: 00 54 52 00:25:17.240 Multi-path I/O 00:25:17.240 May have multiple subsystem ports: No 00:25:17.240 May have multiple controllers: No 00:25:17.240 Associated with SR-IOV VF: No 00:25:17.240 Max Data Transfer Size: 524288 00:25:17.240 Max Number of Namespaces: 256 00:25:17.240 Max Number of I/O Queues: 64 00:25:17.240 NVMe Specification Version (VS): 1.4 00:25:17.240 NVMe Specification Version (Identify): 1.4 00:25:17.240 Maximum Queue Entries: 2048 00:25:17.240 Contiguous Queues Required: Yes 00:25:17.240 Arbitration Mechanisms Supported 00:25:17.240 Weighted Round Robin: Not Supported 00:25:17.240 Vendor Specific: Not Supported 00:25:17.240 Reset Timeout: 7500 ms 00:25:17.240 Doorbell Stride: 4 bytes 00:25:17.240 NVM Subsystem Reset: Not Supported 00:25:17.240 Command Sets Supported 00:25:17.240 NVM Command Set: Supported 00:25:17.240 Boot Partition: Not Supported 00:25:17.240 Memory Page Size Minimum: 4096 bytes 00:25:17.240 Memory Page Size Maximum: 65536 bytes 00:25:17.240 Persistent Memory Region: Not Supported 00:25:17.240 Optional Asynchronous Events Supported 00:25:17.240 Namespace Attribute Notices: Supported 00:25:17.240 Firmware Activation Notices: Not Supported 00:25:17.240 ANA Change Notices: Not Supported 00:25:17.240 PLE Aggregate Log Change Notices: Not Supported 00:25:17.240 LBA Status Info Alert Notices: Not Supported 00:25:17.240 EGE Aggregate Log Change Notices: Not Supported 00:25:17.240 Normal NVM Subsystem Shutdown event: Not Supported 00:25:17.240 Zone Descriptor Change Notices: Not Supported 00:25:17.240 Discovery Log Change Notices: Not Supported 00:25:17.240 Controller Attributes 00:25:17.240 128-bit Host Identifier: Not Supported 00:25:17.240 Non-Operational Permissive Mode: Not Supported 00:25:17.240 NVM Sets: Not Supported 00:25:17.240 Read Recovery Levels: Not Supported 00:25:17.240 Endurance Groups: Not Supported 00:25:17.240 Predictable Latency Mode: Not Supported 00:25:17.240 Traffic Based Keep Alive: Not Supported 00:25:17.240 Namespace Granularity: Not Supported 00:25:17.240 SQ Associations: Not Supported 00:25:17.240 UUID List: Not Supported 00:25:17.240 Multi-Domain Subsystem: Not Supported 00:25:17.240 Fixed Capacity Management: Not Supported 00:25:17.240 Variable Capacity Management: Not Supported 00:25:17.240 Delete Endurance Group: Not Supported 00:25:17.240 Delete NVM Set: Not Supported 00:25:17.240 Extended LBA Formats Supported: Supported 00:25:17.240 Flexible Data Placement Supported: Not Supported 00:25:17.240 00:25:17.240 Controller Memory Buffer Support 00:25:17.240 ================================ 00:25:17.240 Supported: No 00:25:17.240
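
The namespace sizes in these dumps are reported in LBAs, so byte capacity follows from the current LBA format's data size. A worked check for the 12342 namespaces above (1048576 LBAs at the 4096-byte format #04); the arithmetic assumes the data size excludes metadata, which holds for format #04:

echo $(( 1048576 * 4096 ))    # 4294967296 bytes = 4 GiB, matching the reported (4GiB)
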
00:25:17.240 Persistent Memory Region Support 00:25:17.240 ================================ 00:25:17.240 Supported: No 00:25:17.240 00:25:17.240 Admin Command Set Attributes 00:25:17.240 ============================ 00:25:17.240 Security Send/Receive: Not Supported 00:25:17.240 Format NVM: Supported 00:25:17.240 Firmware Activate/Download: Not Supported 00:25:17.240 Namespace Management: Supported 00:25:17.240 Device Self-Test: Not Supported 00:25:17.240 Directives: Supported 00:25:17.240 NVMe-MI: Not Supported 00:25:17.240 Virtualization Management: Not Supported 00:25:17.240 Doorbell Buffer Config: Supported 00:25:17.240 Get LBA Status Capability: Not Supported 00:25:17.240 Command & Feature Lockdown Capability: Not Supported 00:25:17.240 Abort Command Limit: 4 00:25:17.240 Async Event Request Limit: 4 00:25:17.240 Number of Firmware Slots: N/A 00:25:17.240 Firmware Slot 1 Read-Only: N/A 00:25:17.240 Firmware Activation Without Reset: N/A 00:25:17.240 Multiple Update Detection Support: N/A 00:25:17.240 Firmware Update Granularity: No Information Provided 00:25:17.240 Per-Namespace SMART Log: Yes 00:25:17.240 Asymmetric Namespace Access Log Page: Not Supported 00:25:17.240 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:25:17.240 Command Effects Log Page: Supported 00:25:17.240 Get Log Page Extended Data: Supported 00:25:17.240 Telemetry Log Pages: Not Supported 00:25:17.240 Persistent Event Log Pages: Not Supported 00:25:17.240 Supported Log Pages Log Page: May Support 00:25:17.240 Commands Supported & Effects Log Page: Not Supported 00:25:17.240 Feature Identifiers & Effects Log Page: May Support 00:25:17.240 NVMe-MI Commands & Effects Log Page: May Support 00:25:17.240 Data Area 4 for Telemetry Log: Not Supported 00:25:17.240 Error Log Page Entries Supported: 1 00:25:17.240 Keep Alive: Not Supported 00:25:17.240 00:25:17.240 NVM Command Set Attributes 00:25:17.240 ========================== 00:25:17.240 Submission Queue Entry Size 00:25:17.240 Max: 64 00:25:17.240 Min: 64 00:25:17.240 Completion Queue Entry Size 00:25:17.240 Max: 16 00:25:17.240 Min: 16 00:25:17.240 Number of Namespaces: 256 00:25:17.240 Compare Command: Supported 00:25:17.240 Write Uncorrectable Command: Not Supported 00:25:17.240 Dataset Management Command: Supported 00:25:17.240 Write Zeroes Command: Supported 00:25:17.240 Set Features Save Field: Supported 00:25:17.240 Reservations: Not Supported 00:25:17.240 Timestamp: Supported 00:25:17.240 Copy: Supported 00:25:17.240 Volatile Write Cache: Present 00:25:17.240 Atomic Write Unit (Normal): 1 00:25:17.240 Atomic Write Unit (PFail): 1 00:25:17.240 Atomic Compare & Write Unit: 1 00:25:17.240 Fused Compare & Write: Not Supported 00:25:17.240 Scatter-Gather List 00:25:17.240 SGL Command Set: Supported 00:25:17.240 SGL Keyed: Not Supported 00:25:17.240 SGL Bit Bucket Descriptor: Not Supported 00:25:17.240 SGL Metadata Pointer: Not Supported 00:25:17.240 Oversized SGL: Not Supported 00:25:17.240 SGL Metadata Address: Not Supported 00:25:17.240 SGL Offset: Not Supported 00:25:17.240 Transport SGL Data Block: Not Supported 00:25:17.240 Replay Protected Memory Block: Not Supported 00:25:17.240 00:25:17.240 Firmware Slot Information 00:25:17.240 ========================= 00:25:17.240 Active slot: 1 00:25:17.240 Slot 1 Firmware Revision: 1.0 00:25:17.240 00:25:17.240 00:25:17.240 Commands Supported and Effects 00:25:17.240 ============================== 00:25:17.240 Admin Commands 00:25:17.240 -------------- 00:25:17.240 Delete I/O Submission Queue (00h): Supported 00:25:17.240 Create I/O Submission Queue (01h): Supported 00:25:17.240
Get Log Page (02h): Supported 00:25:17.240 Delete I/O Completion Queue (04h): Supported 00:25:17.240 Create I/O Completion Queue (05h): Supported 00:25:17.240 Identify (06h): Supported 00:25:17.240 Abort (08h): Supported 00:25:17.240 Set Features (09h): Supported 00:25:17.241 Get Features (0Ah): Supported 00:25:17.241 Asynchronous Event Request (0Ch): Supported 00:25:17.241 Namespace Attachment (15h): Supported NS-Inventory-Change 00:25:17.241 Directive Send (19h): Supported 00:25:17.241 Directive Receive (1Ah): Supported 00:25:17.241 Virtualization Management (1Ch): Supported 00:25:17.241 Doorbell Buffer Config (7Ch): Supported 00:25:17.241 Format NVM (80h): Supported LBA-Change 00:25:17.241 I/O Commands 00:25:17.241 ------------ 00:25:17.241 Flush (00h): Supported LBA-Change 00:25:17.241 Write (01h): Supported LBA-Change 00:25:17.241 Read (02h): Supported 00:25:17.241 Compare (05h): Supported 00:25:17.241 Write Zeroes (08h): Supported LBA-Change 00:25:17.241 Dataset Management (09h): Supported LBA-Change 00:25:17.241 Unknown (0Ch): Supported 00:25:17.241 Unknown (12h): Supported 00:25:17.241 Copy (19h): Supported LBA-Change 00:25:17.241 Unknown (1Dh): Supported LBA-Change 00:25:17.241 00:25:17.241 Error Log 00:25:17.241 ========= 00:25:17.241 00:25:17.241 Arbitration 00:25:17.241 =========== 00:25:17.241 Arbitration Burst: no limit 00:25:17.241 00:25:17.241 Power Management 00:25:17.241 ================ 00:25:17.241 Number of Power States: 1 00:25:17.241 Current Power State: Power State #0 00:25:17.241 Power State #0: 00:25:17.241 Max Power: 25.00 W 00:25:17.241 Non-Operational State: Operational 00:25:17.241 Entry Latency: 16 microseconds 00:25:17.241 Exit Latency: 4 microseconds 00:25:17.241 Relative Read Throughput: 0 00:25:17.241 Relative Read Latency: 0 00:25:17.241 Relative Write Throughput: 0 00:25:17.241 Relative Write Latency: 0 00:25:17.241 Idle Power: Not Reported 00:25:17.241 Active Power: Not Reported 00:25:17.241 Non-Operational Permissive Mode: Not Supported 00:25:17.241 00:25:17.241 Health Information 00:25:17.241 ================== 00:25:17.241 Critical Warnings: 00:25:17.241 Available Spare Space: OK 00:25:17.241 Temperature: OK 00:25:17.241 Device Reliability: OK 00:25:17.241 Read Only: No 00:25:17.241 Volatile Memory Backup: OK 00:25:17.241 Current Temperature: 323 Kelvin (50 Celsius) 00:25:17.241 Temperature Threshold: 343 Kelvin (70 Celsius) 00:25:17.241 Available Spare: 0% 00:25:17.241 Available Spare Threshold: 0% 00:25:17.241 Life Percentage Used: 0% 00:25:17.241 Data Units Read: 698 00:25:17.241 Data Units Written: 626 00:25:17.241 Host Read Commands: 33728 00:25:17.241 Host Write Commands: 33514 00:25:17.241 Controller Busy Time: 0 minutes 00:25:17.241 Power Cycles: 0 00:25:17.241 Power On Hours: 0 hours 00:25:17.241 Unsafe Shutdowns: 0 00:25:17.241 Unrecoverable Media Errors: 0 00:25:17.241 Lifetime Error Log Entries: 0 00:25:17.241 Warning Temperature Time: 0 minutes 00:25:17.241 Critical Temperature Time: 0 minutes 00:25:17.241 00:25:17.241 Number of Queues 00:25:17.241 ================ 00:25:17.241 Number of I/O Submission Queues: 64 00:25:17.241 Number of I/O Completion Queues: 64 00:25:17.241 00:25:17.241 ZNS Specific Controller Data 00:25:17.241 ============================ 00:25:17.241 Zone Append Size Limit: 0 00:25:17.241 00:25:17.241 00:25:17.241 Active Namespaces 00:25:17.241 ================= 00:25:17.241 Namespace ID:1 00:25:17.241 Error Recovery Timeout: Unlimited 00:25:17.241 Command Set Identifier: NVM (00h) 00:25:17.241 Deallocate: Supported 
00:25:17.241 Deallocated/Unwritten Error: Supported 00:25:17.241 Deallocated Read Value: All 0x00 00:25:17.241 Deallocate in Write Zeroes: Not Supported 00:25:17.241 Deallocated Guard Field: 0xFFFF 00:25:17.241 Flush: Supported 00:25:17.241 Reservation: Not Supported 00:25:17.241 Metadata Transferred as: Separate Metadata Buffer 00:25:17.241 Namespace Sharing Capabilities: Private 00:25:17.241 Size (in LBAs): 1548666 (5GiB) 00:25:17.241 Capacity (in LBAs): 1548666 (5GiB) 00:25:17.241 Utilization (in LBAs): 1548666 (5GiB) 00:25:17.241 Thin Provisioning: Not Supported 00:25:17.241 Per-NS Atomic Units: No 00:25:17.241 Maximum Single Source Range Length: 128 00:25:17.241 Maximum Copy Length: 128 00:25:17.241 Maximum Source Range Count: 128 00:25:17.241 NGUID/EUI64 Never Reused: No 00:25:17.241 Namespace Write Protected: No 00:25:17.241 Number of LBA Formats: 8 00:25:17.241 Current LBA Format: LBA Format #07 00:25:17.241 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:17.241 LBA Format #01: Data Size: 512 Metadata Size: 8 00:25:17.241 LBA Format #02: Data Size: 512 Metadata Size: 16 00:25:17.241 LBA Format #03: Data Size: 512 Metadata Size: 64 00:25:17.241 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:25:17.241 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:25:17.241 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:25:17.241 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:25:17.241 00:25:17.241 NVM Specific Namespace Data 00:25:17.241 =========================== 00:25:17.241 Logical Block Storage Tag Mask: 0 00:25:17.241 Protection Information Capabilities: 00:25:17.241 16b Guard Protection Information Storage Tag Support: No 00:25:17.241 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:25:17.241 Storage Tag Check Read Support: No 00:25:17.241 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.241 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.241 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.241 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.241 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.241 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.241 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.241 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.241 13:45:24 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:25:17.241 13:45:24 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:25:17.500 ===================================================== 00:25:17.500 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:25:17.500 ===================================================== 00:25:17.500 Controller Capabilities/Features 00:25:17.500 ================================ 00:25:17.500 Vendor ID: 1b36 00:25:17.500 Subsystem Vendor ID: 1af4 00:25:17.500 Serial Number: 12341 00:25:17.500 Model Number: QEMU NVMe Ctrl 00:25:17.501 Firmware Version: 8.0.0 00:25:17.501 Recommended Arb Burst: 6 00:25:17.501 IEEE OUI Identifier: 00 54 52 00:25:17.501 Multi-path I/O 00:25:17.501 May have multiple subsystem ports: No 00:25:17.501 May have multiple 
controllers: No 00:25:17.501 Associated with SR-IOV VF: No 00:25:17.501 Max Data Transfer Size: 524288 00:25:17.501 Max Number of Namespaces: 256 00:25:17.501 Max Number of I/O Queues: 64 00:25:17.501 NVMe Specification Version (VS): 1.4 00:25:17.501 NVMe Specification Version (Identify): 1.4 00:25:17.501 Maximum Queue Entries: 2048 00:25:17.501 Contiguous Queues Required: Yes 00:25:17.501 Arbitration Mechanisms Supported 00:25:17.501 Weighted Round Robin: Not Supported 00:25:17.501 Vendor Specific: Not Supported 00:25:17.501 Reset Timeout: 7500 ms 00:25:17.501 Doorbell Stride: 4 bytes 00:25:17.501 NVM Subsystem Reset: Not Supported 00:25:17.501 Command Sets Supported 00:25:17.501 NVM Command Set: Supported 00:25:17.501 Boot Partition: Not Supported 00:25:17.501 Memory Page Size Minimum: 4096 bytes 00:25:17.501 Memory Page Size Maximum: 65536 bytes 00:25:17.501 Persistent Memory Region: Not Supported 00:25:17.501 Optional Asynchronous Events Supported 00:25:17.501 Namespace Attribute Notices: Supported 00:25:17.501 Firmware Activation Notices: Not Supported 00:25:17.501 ANA Change Notices: Not Supported 00:25:17.501 PLE Aggregate Log Change Notices: Not Supported 00:25:17.501 LBA Status Info Alert Notices: Not Supported 00:25:17.501 EGE Aggregate Log Change Notices: Not Supported 00:25:17.501 Normal NVM Subsystem Shutdown event: Not Supported 00:25:17.501 Zone Descriptor Change Notices: Not Supported 00:25:17.501 Discovery Log Change Notices: Not Supported 00:25:17.501 Controller Attributes 00:25:17.501 128-bit Host Identifier: Not Supported 00:25:17.501 Non-Operational Permissive Mode: Not Supported 00:25:17.501 NVM Sets: Not Supported 00:25:17.501 Read Recovery Levels: Not Supported 00:25:17.501 Endurance Groups: Not Supported 00:25:17.501 Predictable Latency Mode: Not Supported 00:25:17.501 Traffic Based Keep Alive: Not Supported 00:25:17.501 Namespace Granularity: Not Supported 00:25:17.501 SQ Associations: Not Supported 00:25:17.501 UUID List: Not Supported 00:25:17.501 Multi-Domain Subsystem: Not Supported 00:25:17.501 Fixed Capacity Management: Not Supported 00:25:17.501 Variable Capacity Management: Not Supported 00:25:17.501 Delete Endurance Group: Not Supported 00:25:17.501 Delete NVM Set: Not Supported 00:25:17.501 Extended LBA Formats Supported: Supported 00:25:17.501 Flexible Data Placement Supported: Not Supported 00:25:17.501 00:25:17.501 Controller Memory Buffer Support 00:25:17.501 ================================ 00:25:17.501 Supported: No 00:25:17.501 00:25:17.501 Persistent Memory Region Support 00:25:17.501 ================================ 00:25:17.501 Supported: No 00:25:17.501
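
Another figure that can be cross-checked from these headers: Max Data Transfer Size is in bytes, so dividing by the active data size bounds a single I/O in blocks. A sketch using the 524288-byte value reported by every controller in this run:

echo $(( 524288 / 4096 ))    # 128 blocks per I/O at the 4 KiB formats
echo $(( 524288 / 512 ))     # 1024 blocks per I/O at the 512-byte formats
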
00:25:17.501 Admin Command Set Attributes 00:25:17.501 ============================ 00:25:17.501 Security Send/Receive: Not Supported 00:25:17.501 Format NVM: Supported 00:25:17.501 Firmware Activate/Download: Not Supported 00:25:17.501 Namespace Management: Supported 00:25:17.501 Device Self-Test: Not Supported 00:25:17.501 Directives: Supported 00:25:17.501 NVMe-MI: Not Supported 00:25:17.501 Virtualization Management: Not Supported 00:25:17.501 Doorbell Buffer Config: Supported 00:25:17.501 Get LBA Status Capability: Not Supported 00:25:17.501 Command & Feature Lockdown Capability: Not Supported 00:25:17.501 Abort Command Limit: 4 00:25:17.501 Async Event Request Limit: 4 00:25:17.501 Number of Firmware Slots: N/A 00:25:17.501 Firmware Slot 1 Read-Only: N/A 00:25:17.501 Firmware Activation Without Reset: N/A 00:25:17.501 Multiple Update Detection Support: N/A 00:25:17.501 Firmware Update Granularity: No Information Provided 00:25:17.501 Per-Namespace SMART Log: Yes 00:25:17.501 Asymmetric Namespace Access Log Page: Not Supported 00:25:17.501 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:25:17.501 Command Effects Log Page: Supported 00:25:17.501 Get Log Page Extended Data: Supported 00:25:17.501 Telemetry Log Pages: Not Supported 00:25:17.501 Persistent Event Log Pages: Not Supported 00:25:17.501 Supported Log Pages Log Page: May Support 00:25:17.501 Commands Supported & Effects Log Page: Not Supported 00:25:17.501 Feature Identifiers & Effects Log Page: May Support 00:25:17.501 NVMe-MI Commands & Effects Log Page: May Support 00:25:17.501 Data Area 4 for Telemetry Log: Not Supported 00:25:17.501 Error Log Page Entries Supported: 1 00:25:17.501 Keep Alive: Not Supported 00:25:17.501 00:25:17.501 NVM Command Set Attributes 00:25:17.501 ========================== 00:25:17.501 Submission Queue Entry Size 00:25:17.501 Max: 64 00:25:17.501 Min: 64 00:25:17.501 Completion Queue Entry Size 00:25:17.501 Max: 16 00:25:17.501 Min: 16 00:25:17.501 Number of Namespaces: 256 00:25:17.501 Compare Command: Supported 00:25:17.501 Write Uncorrectable Command: Not Supported 00:25:17.501 Dataset Management Command: Supported 00:25:17.501 Write Zeroes Command: Supported 00:25:17.501 Set Features Save Field: Supported 00:25:17.501 Reservations: Not Supported 00:25:17.501 Timestamp: Supported 00:25:17.501 Copy: Supported 00:25:17.501 Volatile Write Cache: Present 00:25:17.501 Atomic Write Unit (Normal): 1 00:25:17.501 Atomic Write Unit (PFail): 1 00:25:17.501 Atomic Compare & Write Unit: 1 00:25:17.501 Fused Compare & Write: Not Supported 00:25:17.501 Scatter-Gather List 00:25:17.501 SGL Command Set: Supported 00:25:17.501 SGL Keyed: Not Supported 00:25:17.501 SGL Bit Bucket Descriptor: Not Supported 00:25:17.501 SGL Metadata Pointer: Not Supported 00:25:17.501 Oversized SGL: Not Supported 00:25:17.501 SGL Metadata Address: Not Supported 00:25:17.501 SGL Offset: Not Supported 00:25:17.501 Transport SGL Data Block: Not Supported 00:25:17.501 Replay Protected Memory Block: Not Supported 00:25:17.501 00:25:17.501 Firmware Slot Information 00:25:17.501 ========================= 00:25:17.501 Active slot: 1 00:25:17.501 Slot 1 Firmware Revision: 1.0 00:25:17.501 00:25:17.501 00:25:17.501 Commands Supported and Effects 00:25:17.501 ============================== 00:25:17.501 Admin Commands 00:25:17.501 -------------- 00:25:17.501 Delete I/O Submission Queue (00h): Supported 00:25:17.501 Create I/O Submission Queue (01h): Supported 00:25:17.501 Get Log Page (02h): Supported 00:25:17.501 Delete I/O Completion Queue (04h): Supported 00:25:17.501 Create I/O Completion Queue (05h): Supported 00:25:17.501 Identify (06h): Supported 00:25:17.501 Abort (08h): Supported 00:25:17.501 Set Features (09h): Supported 00:25:17.501 Get Features (0Ah): Supported 00:25:17.501 Asynchronous Event Request (0Ch): Supported 00:25:17.501 Namespace Attachment (15h): Supported NS-Inventory-Change 00:25:17.501 Directive Send (19h): Supported 00:25:17.501 Directive Receive (1Ah): Supported 00:25:17.501 Virtualization Management (1Ch): Supported 00:25:17.501 Doorbell Buffer Config (7Ch): Supported 00:25:17.501 Format NVM (80h): Supported LBA-Change 00:25:17.501 I/O Commands 00:25:17.501 ------------ 00:25:17.501 Flush (00h): Supported LBA-Change 00:25:17.501 Write (01h): Supported LBA-Change 00:25:17.501 Read (02h): Supported 00:25:17.501 Compare (05h): Supported 00:25:17.501 Write Zeroes (08h): Supported LBA-Change 00:25:17.501
Dataset Management (09h): Supported LBA-Change 00:25:17.501 Unknown (0Ch): Supported 00:25:17.501 Unknown (12h): Supported 00:25:17.501 Copy (19h): Supported LBA-Change 00:25:17.501 Unknown (1Dh): Supported LBA-Change 00:25:17.501 00:25:17.501 Error Log 00:25:17.501 ========= 00:25:17.501 00:25:17.501 Arbitration 00:25:17.501 =========== 00:25:17.501 Arbitration Burst: no limit 00:25:17.501 00:25:17.501 Power Management 00:25:17.501 ================ 00:25:17.501 Number of Power States: 1 00:25:17.501 Current Power State: Power State #0 00:25:17.501 Power State #0: 00:25:17.501 Max Power: 25.00 W 00:25:17.501 Non-Operational State: Operational 00:25:17.501 Entry Latency: 16 microseconds 00:25:17.501 Exit Latency: 4 microseconds 00:25:17.501 Relative Read Throughput: 0 00:25:17.501 Relative Read Latency: 0 00:25:17.501 Relative Write Throughput: 0 00:25:17.501 Relative Write Latency: 0 00:25:17.501 Idle Power: Not Reported 00:25:17.501 Active Power: Not Reported 00:25:17.501 Non-Operational Permissive Mode: Not Supported 00:25:17.501 00:25:17.501 Health Information 00:25:17.501 ================== 00:25:17.502 Critical Warnings: 00:25:17.502 Available Spare Space: OK 00:25:17.502 Temperature: OK 00:25:17.502 Device Reliability: OK 00:25:17.502 Read Only: No 00:25:17.502 Volatile Memory Backup: OK 00:25:17.502 Current Temperature: 323 Kelvin (50 Celsius) 00:25:17.502 Temperature Threshold: 343 Kelvin (70 Celsius) 00:25:17.502 Available Spare: 0% 00:25:17.502 Available Spare Threshold: 0% 00:25:17.502 Life Percentage Used: 0% 00:25:17.502 Data Units Read: 1036 00:25:17.502 Data Units Written: 903 00:25:17.502 Host Read Commands: 49919 00:25:17.502 Host Write Commands: 48704 00:25:17.502 Controller Busy Time: 0 minutes 00:25:17.502 Power Cycles: 0 00:25:17.502 Power On Hours: 0 hours 00:25:17.502 Unsafe Shutdowns: 0 00:25:17.502 Unrecoverable Media Errors: 0 00:25:17.502 Lifetime Error Log Entries: 0 00:25:17.502 Warning Temperature Time: 0 minutes 00:25:17.502 Critical Temperature Time: 0 minutes 00:25:17.502 00:25:17.502 Number of Queues 00:25:17.502 ================ 00:25:17.502 Number of I/O Submission Queues: 64 00:25:17.502 Number of I/O Completion Queues: 64 00:25:17.502 00:25:17.502 ZNS Specific Controller Data 00:25:17.502 ============================ 00:25:17.502 Zone Append Size Limit: 0 00:25:17.502 00:25:17.502 00:25:17.502 Active Namespaces 00:25:17.502 ================= 00:25:17.502 Namespace ID:1 00:25:17.502 Error Recovery Timeout: Unlimited 00:25:17.502 Command Set Identifier: NVM (00h) 00:25:17.502 Deallocate: Supported 00:25:17.502 Deallocated/Unwritten Error: Supported 00:25:17.502 Deallocated Read Value: All 0x00 00:25:17.502 Deallocate in Write Zeroes: Not Supported 00:25:17.502 Deallocated Guard Field: 0xFFFF 00:25:17.502 Flush: Supported 00:25:17.502 Reservation: Not Supported 00:25:17.502 Namespace Sharing Capabilities: Private 00:25:17.502 Size (in LBAs): 1310720 (5GiB) 00:25:17.502 Capacity (in LBAs): 1310720 (5GiB) 00:25:17.502 Utilization (in LBAs): 1310720 (5GiB) 00:25:17.502 Thin Provisioning: Not Supported 00:25:17.502 Per-NS Atomic Units: No 00:25:17.502 Maximum Single Source Range Length: 128 00:25:17.502 Maximum Copy Length: 128 00:25:17.502 Maximum Source Range Count: 128 00:25:17.502 NGUID/EUI64 Never Reused: No 00:25:17.502 Namespace Write Protected: No 00:25:17.502 Number of LBA Formats: 8 00:25:17.502 Current LBA Format: LBA Format #04 00:25:17.502 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:17.502 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:25:17.502 LBA Format #02: Data Size: 512 Metadata Size: 16 00:25:17.502 LBA Format #03: Data Size: 512 Metadata Size: 64 00:25:17.502 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:25:17.502 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:25:17.502 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:25:17.502 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:25:17.502 00:25:17.502 NVM Specific Namespace Data 00:25:17.502 =========================== 00:25:17.502 Logical Block Storage Tag Mask: 0 00:25:17.502 Protection Information Capabilities: 00:25:17.502 16b Guard Protection Information Storage Tag Support: No 00:25:17.502 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:25:17.502 Storage Tag Check Read Support: No 00:25:17.502 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.502 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.502 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.502 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.502 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.502 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.502 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.502 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.502 13:45:25 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:25:17.502 13:45:25 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:25:17.762 ===================================================== 00:25:17.762 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:25:17.762 ===================================================== 00:25:17.762 Controller Capabilities/Features 00:25:17.762 ================================ 00:25:17.762 Vendor ID: 1b36 00:25:17.762 Subsystem Vendor ID: 1af4 00:25:17.762 Serial Number: 12342 00:25:17.762 Model Number: QEMU NVMe Ctrl 00:25:17.762 Firmware Version: 8.0.0 00:25:17.762 Recommended Arb Burst: 6 00:25:17.762 IEEE OUI Identifier: 00 54 52 00:25:17.762 Multi-path I/O 00:25:17.762 May have multiple subsystem ports: No 00:25:17.762 May have multiple controllers: No 00:25:17.762 Associated with SR-IOV VF: No 00:25:17.762 Max Data Transfer Size: 524288 00:25:17.762 Max Number of Namespaces: 256 00:25:17.762 Max Number of I/O Queues: 64 00:25:17.762 NVMe Specification Version (VS): 1.4 00:25:17.762 NVMe Specification Version (Identify): 1.4 00:25:17.762 Maximum Queue Entries: 2048 00:25:17.762 Contiguous Queues Required: Yes 00:25:17.762 Arbitration Mechanisms Supported 00:25:17.762 Weighted Round Robin: Not Supported 00:25:17.762 Vendor Specific: Not Supported 00:25:17.762 Reset Timeout: 7500 ms 00:25:17.762 Doorbell Stride: 4 bytes 00:25:17.762 NVM Subsystem Reset: Not Supported 00:25:17.762 Command Sets Supported 00:25:17.762 NVM Command Set: Supported 00:25:17.762 Boot Partition: Not Supported 00:25:17.762 Memory Page Size Minimum: 4096 bytes 00:25:17.762 Memory Page Size Maximum: 65536 bytes 00:25:17.762 Persistent Memory Region: Not Supported 00:25:17.762 Optional Asynchronous Events Supported 00:25:17.762 Namespace Attribute Notices: Supported 00:25:17.762 Firmware 
Activation Notices: Not Supported 00:25:17.762 ANA Change Notices: Not Supported 00:25:17.762 PLE Aggregate Log Change Notices: Not Supported 00:25:17.762 LBA Status Info Alert Notices: Not Supported 00:25:17.762 EGE Aggregate Log Change Notices: Not Supported 00:25:17.762 Normal NVM Subsystem Shutdown event: Not Supported 00:25:17.762 Zone Descriptor Change Notices: Not Supported 00:25:17.762 Discovery Log Change Notices: Not Supported 00:25:17.762 Controller Attributes 00:25:17.762 128-bit Host Identifier: Not Supported 00:25:17.762 Non-Operational Permissive Mode: Not Supported 00:25:17.762 NVM Sets: Not Supported 00:25:17.762 Read Recovery Levels: Not Supported 00:25:17.762 Endurance Groups: Not Supported 00:25:17.762 Predictable Latency Mode: Not Supported 00:25:17.762 Traffic Based Keep Alive: Not Supported 00:25:17.762 Namespace Granularity: Not Supported 00:25:17.762 SQ Associations: Not Supported 00:25:17.762 UUID List: Not Supported 00:25:17.762 Multi-Domain Subsystem: Not Supported 00:25:17.762 Fixed Capacity Management: Not Supported 00:25:17.762 Variable Capacity Management: Not Supported 00:25:17.762 Delete Endurance Group: Not Supported 00:25:17.762 Delete NVM Set: Not Supported 00:25:17.762 Extended LBA Formats Supported: Supported 00:25:17.762 Flexible Data Placement Supported: Not Supported 00:25:17.762 00:25:17.762 Controller Memory Buffer Support 00:25:17.762 ================================ 00:25:17.762 Supported: No 00:25:17.762 00:25:17.762 Persistent Memory Region Support 00:25:17.762 ================================ 00:25:17.762 Supported: No 00:25:17.762 00:25:17.762 Admin Command Set Attributes 00:25:17.762 ============================ 00:25:17.762 Security Send/Receive: Not Supported 00:25:17.762 Format NVM: Supported 00:25:17.762 Firmware Activate/Download: Not Supported 00:25:17.762 Namespace Management: Supported 00:25:17.762 Device Self-Test: Not Supported 00:25:17.762 Directives: Supported 00:25:17.762 NVMe-MI: Not Supported 00:25:17.762 Virtualization Management: Not Supported 00:25:17.762 Doorbell Buffer Config: Supported 00:25:17.762 Get LBA Status Capability: Not Supported 00:25:17.762 Command & Feature Lockdown Capability: Not Supported 00:25:17.762 Abort Command Limit: 4 00:25:17.762 Async Event Request Limit: 4 00:25:17.762 Number of Firmware Slots: N/A 00:25:17.762 Firmware Slot 1 Read-Only: N/A 00:25:17.762 Firmware Activation Without Reset: N/A 00:25:17.762 Multiple Update Detection Support: N/A 00:25:17.762 Firmware Update Granularity: No Information Provided 00:25:17.762 Per-Namespace SMART Log: Yes 00:25:17.762 Asymmetric Namespace Access Log Page: Not Supported 00:25:17.762 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:25:17.762 Command Effects Log Page: Supported 00:25:17.762 Get Log Page Extended Data: Supported 00:25:17.762 Telemetry Log Pages: Not Supported 00:25:17.762 Persistent Event Log Pages: Not Supported 00:25:17.762 Supported Log Pages Log Page: May Support 00:25:17.762 Commands Supported & Effects Log Page: Not Supported 00:25:17.762 Feature Identifiers & Effects Log Page: May Support 00:25:17.762 NVMe-MI Commands & Effects Log Page: May Support 00:25:17.762 Data Area 4 for Telemetry Log: Not Supported 00:25:17.762 Error Log Page Entries Supported: 1 00:25:17.762 Keep Alive: Not Supported 00:25:17.762 00:25:17.762 NVM Command Set Attributes 00:25:17.762 ========================== 00:25:17.762 Submission Queue Entry Size 00:25:17.762 Max: 64 00:25:17.762 Min: 64 00:25:17.762 Completion Queue Entry Size 00:25:17.762 Max: 16
00:25:17.762 Min: 16 00:25:17.762 Number of Namespaces: 256 00:25:17.762 Compare Command: Supported 00:25:17.762 Write Uncorrectable Command: Not Supported 00:25:17.762 Dataset Management Command: Supported 00:25:17.762 Write Zeroes Command: Supported 00:25:17.762 Set Features Save Field: Supported 00:25:17.762 Reservations: Not Supported 00:25:17.762 Timestamp: Supported 00:25:17.762 Copy: Supported 00:25:17.762 Volatile Write Cache: Present 00:25:17.762 Atomic Write Unit (Normal): 1 00:25:17.762 Atomic Write Unit (PFail): 1 00:25:17.762 Atomic Compare & Write Unit: 1 00:25:17.762 Fused Compare & Write: Not Supported 00:25:17.762 Scatter-Gather List 00:25:17.762 SGL Command Set: Supported 00:25:17.762 SGL Keyed: Not Supported 00:25:17.762 SGL Bit Bucket Descriptor: Not Supported 00:25:17.762 SGL Metadata Pointer: Not Supported 00:25:17.762 Oversized SGL: Not Supported 00:25:17.762 SGL Metadata Address: Not Supported 00:25:17.762 SGL Offset: Not Supported 00:25:17.762 Transport SGL Data Block: Not Supported 00:25:17.762 Replay Protected Memory Block: Not Supported 00:25:17.762 00:25:17.762 Firmware Slot Information 00:25:17.762 ========================= 00:25:17.762 Active slot: 1 00:25:17.762 Slot 1 Firmware Revision: 1.0 00:25:17.762 00:25:17.762 00:25:17.762 Commands Supported and Effects 00:25:17.762 ============================== 00:25:17.762 Admin Commands 00:25:17.762 -------------- 00:25:17.762 Delete I/O Submission Queue (00h): Supported 00:25:17.762 Create I/O Submission Queue (01h): Supported 00:25:17.762 Get Log Page (02h): Supported 00:25:17.762 Delete I/O Completion Queue (04h): Supported 00:25:17.762 Create I/O Completion Queue (05h): Supported 00:25:17.762 Identify (06h): Supported 00:25:17.763 Abort (08h): Supported 00:25:17.763 Set Features (09h): Supported 00:25:17.763 Get Features (0Ah): Supported 00:25:17.763 Asynchronous Event Request (0Ch): Supported 00:25:17.763 Namespace Attachment (15h): Supported NS-Inventory-Change 00:25:17.763 Directive Send (19h): Supported 00:25:17.763 Directive Receive (1Ah): Supported 00:25:17.763 Virtualization Management (1Ch): Supported 00:25:17.763 Doorbell Buffer Config (7Ch): Supported 00:25:17.763 Format NVM (80h): Supported LBA-Change 00:25:17.763 I/O Commands 00:25:17.763 ------------ 00:25:17.763 Flush (00h): Supported LBA-Change 00:25:17.763 Write (01h): Supported LBA-Change 00:25:17.763 Read (02h): Supported 00:25:17.763 Compare (05h): Supported 00:25:17.763 Write Zeroes (08h): Supported LBA-Change 00:25:17.763 Dataset Management (09h): Supported LBA-Change 00:25:17.763 Unknown (0Ch): Supported 00:25:17.763 Unknown (12h): Supported 00:25:17.763 Copy (19h): Supported LBA-Change 00:25:17.763 Unknown (1Dh): Supported LBA-Change 00:25:17.763 00:25:17.763 Error Log 00:25:17.763 ========= 00:25:17.763 00:25:17.763 Arbitration 00:25:17.763 =========== 00:25:17.763 Arbitration Burst: no limit 00:25:17.763 00:25:17.763 Power Management 00:25:17.763 ================ 00:25:17.763 Number of Power States: 1 00:25:17.763 Current Power State: Power State #0 00:25:17.763 Power State #0: 00:25:17.763 Max Power: 25.00 W 00:25:17.763 Non-Operational State: Operational 00:25:17.763 Entry Latency: 16 microseconds 00:25:17.763 Exit Latency: 4 microseconds 00:25:17.763 Relative Read Throughput: 0 00:25:17.763 Relative Read Latency: 0 00:25:17.763 Relative Write Throughput: 0 00:25:17.763 Relative Write Latency: 0 00:25:17.763 Idle Power: Not Reported 00:25:17.763 Active Power: Not Reported 00:25:17.763 Non-Operational Permissive Mode: Not Supported 
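
The Health Information block that follows reports temperatures in Kelvin alongside a Celsius value; the tool appears to derive the latter with an integer offset of 273 (not 273.15), which a quick check reproduces:

echo $(( 323 - 273 ))    # 50, the reported current temperature in Celsius
echo $(( 343 - 273 ))    # 70, the reported threshold in Celsius
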
00:25:17.763 00:25:17.763 Health Information 00:25:17.763 ================== 00:25:17.763 Critical Warnings: 00:25:17.763 Available Spare Space: OK 00:25:17.763 Temperature: OK 00:25:17.763 Device Reliability: OK 00:25:17.763 Read Only: No 00:25:17.763 Volatile Memory Backup: OK 00:25:17.763 Current Temperature: 323 Kelvin (50 Celsius) 00:25:17.763 Temperature Threshold: 343 Kelvin (70 Celsius) 00:25:17.763 Available Spare: 0% 00:25:17.763 Available Spare Threshold: 0% 00:25:17.763 Life Percentage Used: 0% 00:25:17.763 Data Units Read: 2219 00:25:17.763 Data Units Written: 2006 00:25:17.763 Host Read Commands: 102777 00:25:17.763 Host Write Commands: 101046 00:25:17.763 Controller Busy Time: 0 minutes 00:25:17.763 Power Cycles: 0 00:25:17.763 Power On Hours: 0 hours 00:25:17.763 Unsafe Shutdowns: 0 00:25:17.763 Unrecoverable Media Errors: 0 00:25:17.763 Lifetime Error Log Entries: 0 00:25:17.763 Warning Temperature Time: 0 minutes 00:25:17.763 Critical Temperature Time: 0 minutes 00:25:17.763 00:25:17.763 Number of Queues 00:25:17.763 ================ 00:25:17.763 Number of I/O Submission Queues: 64 00:25:17.763 Number of I/O Completion Queues: 64 00:25:17.763 00:25:17.763 ZNS Specific Controller Data 00:25:17.763 ============================ 00:25:17.763 Zone Append Size Limit: 0 00:25:17.763 00:25:17.763 00:25:17.763 Active Namespaces 00:25:17.763 ================= 00:25:17.763 Namespace ID:1 00:25:17.763 Error Recovery Timeout: Unlimited 00:25:17.763 Command Set Identifier: NVM (00h) 00:25:17.763 Deallocate: Supported 00:25:17.763 Deallocated/Unwritten Error: Supported 00:25:17.763 Deallocated Read Value: All 0x00 00:25:17.763 Deallocate in Write Zeroes: Not Supported 00:25:17.763 Deallocated Guard Field: 0xFFFF 00:25:17.763 Flush: Supported 00:25:17.763 Reservation: Not Supported 00:25:17.763 Namespace Sharing Capabilities: Private 00:25:17.763 Size (in LBAs): 1048576 (4GiB) 00:25:17.763 Capacity (in LBAs): 1048576 (4GiB) 00:25:17.763 Utilization (in LBAs): 1048576 (4GiB) 00:25:17.763 Thin Provisioning: Not Supported 00:25:17.763 Per-NS Atomic Units: No 00:25:17.763 Maximum Single Source Range Length: 128 00:25:17.763 Maximum Copy Length: 128 00:25:17.763 Maximum Source Range Count: 128 00:25:17.763 NGUID/EUI64 Never Reused: No 00:25:17.763 Namespace Write Protected: No 00:25:17.763 Number of LBA Formats: 8 00:25:17.763 Current LBA Format: LBA Format #04 00:25:17.763 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:17.763 LBA Format #01: Data Size: 512 Metadata Size: 8 00:25:17.763 LBA Format #02: Data Size: 512 Metadata Size: 16 00:25:17.763 LBA Format #03: Data Size: 512 Metadata Size: 64 00:25:17.763 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:25:17.763 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:25:17.763 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:25:17.763 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:25:17.763 00:25:17.763 NVM Specific Namespace Data 00:25:17.763 =========================== 00:25:17.763 Logical Block Storage Tag Mask: 0 00:25:17.763 Protection Information Capabilities: 00:25:17.763 16b Guard Protection Information Storage Tag Support: No 00:25:17.763 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:25:17.763 Storage Tag Check Read Support: No 00:25:17.763 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.763 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.763 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.763 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.763 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.763 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.763 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.763 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.763 Namespace ID:2 00:25:17.763 Error Recovery Timeout: Unlimited 00:25:17.763 Command Set Identifier: NVM (00h) 00:25:17.763 Deallocate: Supported 00:25:17.763 Deallocated/Unwritten Error: Supported 00:25:17.763 Deallocated Read Value: All 0x00 00:25:17.763 Deallocate in Write Zeroes: Not Supported 00:25:17.763 Deallocated Guard Field: 0xFFFF 00:25:17.763 Flush: Supported 00:25:17.763 Reservation: Not Supported 00:25:17.763 Namespace Sharing Capabilities: Private 00:25:17.763 Size (in LBAs): 1048576 (4GiB) 00:25:17.763 Capacity (in LBAs): 1048576 (4GiB) 00:25:17.763 Utilization (in LBAs): 1048576 (4GiB) 00:25:17.763 Thin Provisioning: Not Supported 00:25:17.763 Per-NS Atomic Units: No 00:25:17.763 Maximum Single Source Range Length: 128 00:25:17.763 Maximum Copy Length: 128 00:25:17.763 Maximum Source Range Count: 128 00:25:17.763 NGUID/EUI64 Never Reused: No 00:25:17.763 Namespace Write Protected: No 00:25:17.763 Number of LBA Formats: 8 00:25:17.763 Current LBA Format: LBA Format #04 00:25:17.763 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:17.763 LBA Format #01: Data Size: 512 Metadata Size: 8 00:25:17.763 LBA Format #02: Data Size: 512 Metadata Size: 16 00:25:17.763 LBA Format #03: Data Size: 512 Metadata Size: 64 00:25:17.763 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:25:17.763 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:25:17.763 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:25:17.763 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:25:17.763 00:25:17.763 NVM Specific Namespace Data 00:25:17.763 =========================== 00:25:17.763 Logical Block Storage Tag Mask: 0 00:25:17.763 Protection Information Capabilities: 00:25:17.763 16b Guard Protection Information Storage Tag Support: No 00:25:17.763 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:25:17.763 Storage Tag Check Read Support: No 00:25:17.763 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.763 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.763 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.763 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.763 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.763 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.763 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.763 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:17.763 Namespace ID:3 00:25:17.763 Error Recovery Timeout: Unlimited 00:25:17.763 Command Set Identifier: NVM (00h) 00:25:17.763 Deallocate: Supported 00:25:17.764 Deallocated/Unwritten Error: Supported 00:25:17.764 Deallocated Read 
Value: All 0x00 00:25:17.764 Deallocate in Write Zeroes: Not Supported 00:25:17.764 Deallocated Guard Field: 0xFFFF 00:25:17.764 Flush: Supported 00:25:17.764 Reservation: Not Supported 00:25:17.764 Namespace Sharing Capabilities: Private 00:25:17.764 Size (in LBAs): 1048576 (4GiB) 00:25:17.764 Capacity (in LBAs): 1048576 (4GiB) 00:25:17.764 Utilization (in LBAs): 1048576 (4GiB) 00:25:17.764 Thin Provisioning: Not Supported 00:25:17.764 Per-NS Atomic Units: No 00:25:17.764 Maximum Single Source Range Length: 128 00:25:17.764 Maximum Copy Length: 128 00:25:17.764 Maximum Source Range Count: 128 00:25:17.764 NGUID/EUI64 Never Reused: No 00:25:17.764 Namespace Write Protected: No 00:25:17.764 Number of LBA Formats: 8 00:25:17.764 Current LBA Format: LBA Format #04 00:25:17.764 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:17.764 LBA Format #01: Data Size: 512 Metadata Size: 8 00:25:17.764 LBA Format #02: Data Size: 512 Metadata Size: 16 00:25:17.764 LBA Format #03: Data Size: 512 Metadata Size: 64 00:25:17.764 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:25:17.764 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:25:17.764 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:25:17.764 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:25:17.764 00:25:17.764 NVM Specific Namespace Data 00:25:17.764 =========================== 00:25:17.764 Logical Block Storage Tag Mask: 0 00:25:17.764 Protection Information Capabilities: 00:25:17.764 16b Guard Protection Information Storage Tag Support: No 00:25:17.764 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:25:18.022 Storage Tag Check Read Support: No 00:25:18.022 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:18.022 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:18.022 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:18.022 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:18.022 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:18.022 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:18.022 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:18.022 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:18.022 13:45:25 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:25:18.022 13:45:25 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:25:18.282 ===================================================== 00:25:18.282 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:25:18.282 ===================================================== 00:25:18.282 Controller Capabilities/Features 00:25:18.282 ================================ 00:25:18.282 Vendor ID: 1b36 00:25:18.282 Subsystem Vendor ID: 1af4 00:25:18.282 Serial Number: 12343 00:25:18.282 Model Number: QEMU NVMe Ctrl 00:25:18.282 Firmware Version: 8.0.0 00:25:18.282 Recommended Arb Burst: 6 00:25:18.282 IEEE OUI Identifier: 00 54 52 00:25:18.282 Multi-path I/O 00:25:18.282 May have multiple subsystem ports: No 00:25:18.282 May have multiple controllers: Yes 00:25:18.282 Associated with SR-IOV VF: No 00:25:18.282 Max Data Transfer Size: 524288 00:25:18.282 Max Number of Namespaces: 
256 00:25:18.282 Max Number of I/O Queues: 64 00:25:18.282 NVMe Specification Version (VS): 1.4 00:25:18.282 NVMe Specification Version (Identify): 1.4 00:25:18.282 Maximum Queue Entries: 2048 00:25:18.282 Contiguous Queues Required: Yes 00:25:18.282 Arbitration Mechanisms Supported 00:25:18.282 Weighted Round Robin: Not Supported 00:25:18.282 Vendor Specific: Not Supported 00:25:18.282 Reset Timeout: 7500 ms 00:25:18.282 Doorbell Stride: 4 bytes 00:25:18.282 NVM Subsystem Reset: Not Supported 00:25:18.282 Command Sets Supported 00:25:18.282 NVM Command Set: Supported 00:25:18.282 Boot Partition: Not Supported 00:25:18.282 Memory Page Size Minimum: 4096 bytes 00:25:18.282 Memory Page Size Maximum: 65536 bytes 00:25:18.282 Persistent Memory Region: Not Supported 00:25:18.282 Optional Asynchronous Events Supported 00:25:18.282 Namespace Attribute Notices: Supported 00:25:18.282 Firmware Activation Notices: Not Supported 00:25:18.282 ANA Change Notices: Not Supported 00:25:18.282 PLE Aggregate Log Change Notices: Not Supported 00:25:18.282 LBA Status Info Alert Notices: Not Supported 00:25:18.282 EGE Aggregate Log Change Notices: Not Supported 00:25:18.282 Normal NVM Subsystem Shutdown event: Not Supported 00:25:18.282 Zone Descriptor Change Notices: Not Supported 00:25:18.282 Discovery Log Change Notices: Not Supported 00:25:18.282 Controller Attributes 00:25:18.282 128-bit Host Identifier: Not Supported 00:25:18.282 Non-Operational Permissive Mode: Not Supported 00:25:18.282 NVM Sets: Not Supported 00:25:18.282 Read Recovery Levels: Not Supported 00:25:18.282 Endurance Groups: Supported 00:25:18.282 Predictable Latency Mode: Not Supported 00:25:18.282 Traffic Based Keep Alive: Not Supported 00:25:18.282 Namespace Granularity: Not Supported 00:25:18.282 SQ Associations: Not Supported 00:25:18.282 UUID List: Not Supported 00:25:18.282 Multi-Domain Subsystem: Not Supported 00:25:18.282 Fixed Capacity Management: Not Supported 00:25:18.282 Variable Capacity Management: Not Supported 00:25:18.282 Delete Endurance Group: Not Supported 00:25:18.282 Delete NVM Set: Not Supported 00:25:18.282 Extended LBA Formats Supported: Supported 00:25:18.282 Flexible Data Placement Supported: Supported 00:25:18.282 00:25:18.282 Controller Memory Buffer Support 00:25:18.282 ================================ 00:25:18.282 Supported: No 00:25:18.282 00:25:18.282 Persistent Memory Region Support 00:25:18.282 ================================ 00:25:18.282 Supported: No 00:25:18.282 00:25:18.282 Admin Command Set Attributes 00:25:18.282 ============================ 00:25:18.282 Security Send/Receive: Not Supported 00:25:18.282 Format NVM: Supported 00:25:18.282 Firmware Activate/Download: Not Supported 00:25:18.282 Namespace Management: Supported 00:25:18.282 Device Self-Test: Not Supported 00:25:18.282 Directives: Supported 00:25:18.282 NVMe-MI: Not Supported 00:25:18.282 Virtualization Management: Not Supported 00:25:18.282 Doorbell Buffer Config: Supported 00:25:18.282 Get LBA Status Capability: Not Supported 00:25:18.282 Command & Feature Lockdown Capability: Not Supported 00:25:18.282 Abort Command Limit: 4 00:25:18.282 Async Event Request Limit: 4 00:25:18.282 Number of Firmware Slots: N/A 00:25:18.282 Firmware Slot 1 Read-Only: N/A 00:25:18.282 Firmware Activation Without Reset: N/A 00:25:18.282 Multiple Update Detection Support: N/A 00:25:18.283 Firmware Update Granularity: No Information Provided 00:25:18.283 Per-Namespace SMART Log: Yes 00:25:18.283 Asymmetric Namespace Access Log Page: Not Supported
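The controller dump above (and continuing below) was produced by the spdk_nvme_identify invocation logged at 13:45:25 against the QEMU controller at PCI address 0000:00:13.0. As a minimal sketch, assuming the same SPDK build tree and a device still bound for SPDK use at that address, the dump can be reproduced standalone and trimmed to a few headline fields; the grep filter is illustrative only and not part of the original run:

# Re-run of the identify step from this log; the binary path, transport
# string, and shared-memory id (-i 0) are copied verbatim from the logged
# command line. The grep pattern is an illustrative filter only.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 |
  grep -E 'Serial Number|Model Number|Firmware Version|Endurance Groups|Flexible Data Placement'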
00:25:18.283 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:25:18.283 Command Effects Log Page: Supported 00:25:18.283 Get Log Page Extended Data: Supported 00:25:18.283 Telemetry Log Pages: Not Supported 00:25:18.283 Persistent Event Log Pages: Not Supported 00:25:18.283 Supported Log Pages Log Page: May Support 00:25:18.283 Commands Supported & Effects Log Page: Not Supported 00:25:18.283 Feature Identifiers & Effects Log Page: May Support 00:25:18.283 NVMe-MI Commands & Effects Log Page: May Support 00:25:18.283 Data Area 4 for Telemetry Log: Not Supported 00:25:18.283 Error Log Page Entries Supported: 1 00:25:18.283 Keep Alive: Not Supported 00:25:18.283 00:25:18.283 NVM Command Set Attributes 00:25:18.283 ========================== 00:25:18.283 Submission Queue Entry Size 00:25:18.283 Max: 64 00:25:18.283 Min: 64 00:25:18.283 Completion Queue Entry Size 00:25:18.283 Max: 16 00:25:18.283 Min: 16 00:25:18.283 Number of Namespaces: 256 00:25:18.283 Compare Command: Supported 00:25:18.283 Write Uncorrectable Command: Not Supported 00:25:18.283 Dataset Management Command: Supported 00:25:18.283 Write Zeroes Command: Supported 00:25:18.283 Set Features Save Field: Supported 00:25:18.283 Reservations: Not Supported 00:25:18.283 Timestamp: Supported 00:25:18.283 Copy: Supported 00:25:18.283 Volatile Write Cache: Present 00:25:18.283 Atomic Write Unit (Normal): 1 00:25:18.283 Atomic Write Unit (PFail): 1 00:25:18.283 Atomic Compare & Write Unit: 1 00:25:18.283 Fused Compare & Write: Not Supported 00:25:18.283 Scatter-Gather List 00:25:18.283 SGL Command Set: Supported 00:25:18.283 SGL Keyed: Not Supported 00:25:18.283 SGL Bit Bucket Descriptor: Not Supported 00:25:18.283 SGL Metadata Pointer: Not Supported 00:25:18.283 Oversized SGL: Not Supported 00:25:18.283 SGL Metadata Address: Not Supported 00:25:18.283 SGL Offset: Not Supported 00:25:18.283 Transport SGL Data Block: Not Supported 00:25:18.283 Replay Protected Memory Block: Not Supported 00:25:18.283 00:25:18.283 Firmware Slot Information 00:25:18.283 ========================= 00:25:18.283 Active slot: 1 00:25:18.283 Slot 1 Firmware Revision: 1.0 00:25:18.283 00:25:18.283 00:25:18.283 Commands Supported and Effects 00:25:18.283 ============================== 00:25:18.283 Admin Commands 00:25:18.283 -------------- 00:25:18.283 Delete I/O Submission Queue (00h): Supported 00:25:18.283 Create I/O Submission Queue (01h): Supported 00:25:18.283 Get Log Page (02h): Supported 00:25:18.283 Delete I/O Completion Queue (04h): Supported 00:25:18.283 Create I/O Completion Queue (05h): Supported 00:25:18.283 Identify (06h): Supported 00:25:18.283 Abort (08h): Supported 00:25:18.283 Set Features (09h): Supported 00:25:18.283 Get Features (0Ah): Supported 00:25:18.283 Asynchronous Event Request (0Ch): Supported 00:25:18.283 Namespace Attachment (15h): Supported NS-Inventory-Change 00:25:18.283 Directive Send (19h): Supported 00:25:18.283 Directive Receive (1Ah): Supported 00:25:18.283 Virtualization Management (1Ch): Supported 00:25:18.283 Doorbell Buffer Config (7Ch): Supported 00:25:18.283 Format NVM (80h): Supported LBA-Change 00:25:18.283 I/O Commands 00:25:18.283 ------------ 00:25:18.283 Flush (00h): Supported LBA-Change 00:25:18.283 Write (01h): Supported LBA-Change 00:25:18.283 Read (02h): Supported 00:25:18.283 Compare (05h): Supported 00:25:18.283 Write Zeroes (08h): Supported LBA-Change 00:25:18.283 Dataset Management (09h): Supported LBA-Change 00:25:18.283 Unknown (0Ch): Supported 00:25:18.283 Unknown (12h): Supported 00:25:18.283 Copy
(19h): Supported LBA-Change 00:25:18.283 Unknown (1Dh): Supported LBA-Change 00:25:18.283 00:25:18.283 Error Log 00:25:18.283 ========= 00:25:18.283 00:25:18.283 Arbitration 00:25:18.283 =========== 00:25:18.283 Arbitration Burst: no limit 00:25:18.283 00:25:18.283 Power Management 00:25:18.283 ================ 00:25:18.283 Number of Power States: 1 00:25:18.283 Current Power State: Power State #0 00:25:18.283 Power State #0: 00:25:18.283 Max Power: 25.00 W 00:25:18.283 Non-Operational State: Operational 00:25:18.283 Entry Latency: 16 microseconds 00:25:18.283 Exit Latency: 4 microseconds 00:25:18.283 Relative Read Throughput: 0 00:25:18.283 Relative Read Latency: 0 00:25:18.283 Relative Write Throughput: 0 00:25:18.283 Relative Write Latency: 0 00:25:18.283 Idle Power: Not Reported 00:25:18.283 Active Power: Not Reported 00:25:18.283 Non-Operational Permissive Mode: Not Supported 00:25:18.283 00:25:18.283 Health Information 00:25:18.283 ================== 00:25:18.283 Critical Warnings: 00:25:18.283 Available Spare Space: OK 00:25:18.283 Temperature: OK 00:25:18.283 Device Reliability: OK 00:25:18.283 Read Only: No 00:25:18.283 Volatile Memory Backup: OK 00:25:18.283 Current Temperature: 323 Kelvin (50 Celsius) 00:25:18.283 Temperature Threshold: 343 Kelvin (70 Celsius) 00:25:18.283 Available Spare: 0% 00:25:18.283 Available Spare Threshold: 0% 00:25:18.283 Life Percentage Used: 0% 00:25:18.283 Data Units Read: 826 00:25:18.283 Data Units Written: 755 00:25:18.283 Host Read Commands: 34967 00:25:18.283 Host Write Commands: 34390 00:25:18.283 Controller Busy Time: 0 minutes 00:25:18.283 Power Cycles: 0 00:25:18.283 Power On Hours: 0 hours 00:25:18.283 Unsafe Shutdowns: 0 00:25:18.283 Unrecoverable Media Errors: 0 00:25:18.283 Lifetime Error Log Entries: 0 00:25:18.283 Warning Temperature Time: 0 minutes 00:25:18.283 Critical Temperature Time: 0 minutes 00:25:18.283 00:25:18.283 Number of Queues 00:25:18.283 ================ 00:25:18.283 Number of I/O Submission Queues: 64 00:25:18.283 Number of I/O Completion Queues: 64 00:25:18.283 00:25:18.283 ZNS Specific Controller Data 00:25:18.283 ============================ 00:25:18.283 Zone Append Size Limit: 0 00:25:18.283 00:25:18.283 00:25:18.283 Active Namespaces 00:25:18.283 ================= 00:25:18.283 Namespace ID:1 00:25:18.283 Error Recovery Timeout: Unlimited 00:25:18.283 Command Set Identifier: NVM (00h) 00:25:18.283 Deallocate: Supported 00:25:18.283 Deallocated/Unwritten Error: Supported 00:25:18.283 Deallocated Read Value: All 0x00 00:25:18.283 Deallocate in Write Zeroes: Not Supported 00:25:18.283 Deallocated Guard Field: 0xFFFF 00:25:18.283 Flush: Supported 00:25:18.283 Reservation: Not Supported 00:25:18.283 Namespace Sharing Capabilities: Multiple Controllers 00:25:18.283 Size (in LBAs): 262144 (1GiB) 00:25:18.283 Capacity (in LBAs): 262144 (1GiB) 00:25:18.283 Utilization (in LBAs): 262144 (1GiB) 00:25:18.283 Thin Provisioning: Not Supported 00:25:18.283 Per-NS Atomic Units: No 00:25:18.283 Maximum Single Source Range Length: 128 00:25:18.283 Maximum Copy Length: 128 00:25:18.283 Maximum Source Range Count: 128 00:25:18.283 NGUID/EUI64 Never Reused: No 00:25:18.283 Namespace Write Protected: No 00:25:18.283 Endurance group ID: 1 00:25:18.283 Number of LBA Formats: 8 00:25:18.283 Current LBA Format: LBA Format #04 00:25:18.283 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:18.283 LBA Format #01: Data Size: 512 Metadata Size: 8 00:25:18.283 LBA Format #02: Data Size: 512 Metadata Size: 16 00:25:18.283 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:25:18.283 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:25:18.283 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:25:18.283 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:25:18.283 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:25:18.283 00:25:18.283 Get Feature FDP: 00:25:18.283 ================ 00:25:18.283 Enabled: Yes 00:25:18.283 FDP configuration index: 0 00:25:18.284 00:25:18.284 FDP configurations log page 00:25:18.284 =========================== 00:25:18.284 Number of FDP configurations: 1 00:25:18.284 Version: 0 00:25:18.284 Size: 112 00:25:18.284 FDP Configuration Descriptor: 0 00:25:18.284 Descriptor Size: 96 00:25:18.284 Reclaim Group Identifier format: 2 00:25:18.284 FDP Volatile Write Cache: Not Present 00:25:18.284 FDP Configuration: Valid 00:25:18.284 Vendor Specific Size: 0 00:25:18.284 Number of Reclaim Groups: 2 00:25:18.284 Number of Reclaim Unit Handles: 8 00:25:18.284 Max Placement Identifiers: 128 00:25:18.284 Number of Namespaces Supported: 256 00:25:18.284 Reclaim Unit Nominal Size: 6000000 bytes 00:25:18.284 Estimated Reclaim Unit Time Limit: Not Reported 00:25:18.284 RUH Desc #000: RUH Type: Initially Isolated 00:25:18.284 RUH Desc #001: RUH Type: Initially Isolated 00:25:18.284 RUH Desc #002: RUH Type: Initially Isolated 00:25:18.284 RUH Desc #003: RUH Type: Initially Isolated 00:25:18.284 RUH Desc #004: RUH Type: Initially Isolated 00:25:18.284 RUH Desc #005: RUH Type: Initially Isolated 00:25:18.284 RUH Desc #006: RUH Type: Initially Isolated 00:25:18.284 RUH Desc #007: RUH Type: Initially Isolated 00:25:18.284 00:25:18.284 FDP reclaim unit handle usage log page 00:25:18.284 ====================================== 00:25:18.284 Number of Reclaim Unit Handles: 8 00:25:18.284 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:25:18.284 RUH Usage Desc #001: RUH Attributes: Unused 00:25:18.284 RUH Usage Desc #002: RUH Attributes: Unused 00:25:18.284 RUH Usage Desc #003: RUH Attributes: Unused 00:25:18.284 RUH Usage Desc #004: RUH Attributes: Unused 00:25:18.284 RUH Usage Desc #005: RUH Attributes: Unused 00:25:18.284 RUH Usage Desc #006: RUH Attributes: Unused 00:25:18.284 RUH Usage Desc #007: RUH Attributes: Unused 00:25:18.284 00:25:18.284 FDP statistics log page 00:25:18.284 ======================= 00:25:18.284 Host bytes with metadata written: 477863936 00:25:18.284 Media bytes with metadata written: 477908992 00:25:18.284 Media bytes erased: 0 00:25:18.284 00:25:18.284 FDP events log page 00:25:18.284 =================== 00:25:18.284 Number of FDP events: 0 00:25:18.284 00:25:18.284 NVM Specific Namespace Data 00:25:18.284 =========================== 00:25:18.284 Logical Block Storage Tag Mask: 0 00:25:18.284 Protection Information Capabilities: 00:25:18.284 16b Guard Protection Information Storage Tag Support: No 00:25:18.284 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:25:18.284 Storage Tag Check Read Support: No 00:25:18.284 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:18.284 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:18.284 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:18.284 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:18.284 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:18.284 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:18.284 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:18.284 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:25:18.284 00:25:18.284 real 0m1.623s 00:25:18.284 user 0m0.578s 00:25:18.284 sys 0m0.809s 00:25:18.284 13:45:25 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:18.284 13:45:25 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:25:18.284 ************************************ 00:25:18.284 END TEST nvme_identify 00:25:18.284 ************************************ 00:25:18.284 13:45:25 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:25:18.284 13:45:25 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:18.284 13:45:25 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:18.284 13:45:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:25:18.284 ************************************ 00:25:18.284 START TEST nvme_perf 00:25:18.284 ************************************ 00:25:18.284 13:45:25 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:25:18.284 13:45:25 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:25:19.727 Initializing NVMe Controllers 00:25:19.727 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:25:19.727 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:25:19.727 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:25:19.727 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:25:19.727 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:25:19.727 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:25:19.727 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:25:19.727 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:25:19.727 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:25:19.727 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:25:19.727 Initialization complete. Launching workers. 
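The nvme_perf test that starts above drives every attached namespace with a 128-deep read workload at a 12288-byte I/O size for one second; the doubled -L flag is what requests the per-device latency summaries and detailed histograms that follow. A minimal standalone sketch of the same invocation, same build tree assumed, with the flags copied verbatim from the logged command line (-N is left as logged and not interpreted here):

# Hypothetical standalone re-run of the logged perf step:
#   -q 128    queue depth per namespace
#   -w read   read workload
#   -o 12288  I/O size in bytes (12 KiB, i.e. three 4096-byte blocks)
#   -t 1      run time in seconds
#   -LL       latency tracking, doubled to also emit the detailed histograms
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf \
    -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

The results below are internally consistent: each namespace reports 13518.25 IOPS, and 13518.25 x 12288 bytes / 2^20 = 158.42 MiB/s, the throughput column as printed. By Little's Law, 13518 IOPS at a mean latency near 9.45 ms keeps about 13518 x 0.00945, roughly 128, commands in flight, exactly the configured queue depth, so the run is bounded by queue occupancy rather than device idle time. In the summary blocks, each "P% : N us" line is a latency percentile (for 0000:00:10.0, 99% of reads completed within about 21.4 ms); in the histograms, the percentage column is the cumulative share of all completed I/Os, while the parenthesized figure is the count for that single latency bucket.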
00:25:19.727 ======================================================== 00:25:19.727 Latency(us) 00:25:19.727 Device Information : IOPS MiB/s Average min max 00:25:19.727 PCIE (0000:00:10.0) NSID 1 from core 0: 13518.25 158.42 9494.65 7033.84 41367.55 00:25:19.727 PCIE (0000:00:11.0) NSID 1 from core 0: 13518.25 158.42 9478.71 7138.00 39383.57 00:25:19.727 PCIE (0000:00:13.0) NSID 1 from core 0: 13518.25 158.42 9459.92 7159.48 37548.50 00:25:19.727 PCIE (0000:00:12.0) NSID 1 from core 0: 13518.25 158.42 9440.50 7127.54 35344.58 00:25:19.727 PCIE (0000:00:12.0) NSID 2 from core 0: 13518.25 158.42 9420.39 7120.01 33165.23 00:25:19.727 PCIE (0000:00:12.0) NSID 3 from core 0: 13518.25 158.42 9401.08 7081.54 30821.62 00:25:19.727 ======================================================== 00:25:19.727 Total : 81109.52 950.50 9449.21 7033.84 41367.55 00:25:19.727 00:25:19.727 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:25:19.727 ================================================================================= 00:25:19.727 1.00000% : 7297.677us 00:25:19.727 10.00000% : 7612.479us 00:25:19.727 25.00000% : 7955.899us 00:25:19.727 50.00000% : 8413.792us 00:25:19.727 75.00000% : 9501.289us 00:25:19.727 90.00000% : 12477.597us 00:25:19.727 95.00000% : 16140.744us 00:25:19.727 98.00000% : 18430.211us 00:25:19.727 99.00000% : 21406.519us 00:25:19.727 99.50000% : 29992.021us 00:25:19.727 99.90000% : 40981.464us 00:25:19.727 99.99000% : 41439.357us 00:25:19.727 99.99900% : 41439.357us 00:25:19.727 99.99990% : 41439.357us 00:25:19.727 99.99999% : 41439.357us 00:25:19.727 00:25:19.727 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:25:19.727 ================================================================================= 00:25:19.727 1.00000% : 7383.532us 00:25:19.727 10.00000% : 7669.715us 00:25:19.727 25.00000% : 7955.899us 00:25:19.727 50.00000% : 8413.792us 00:25:19.727 75.00000% : 9386.816us 00:25:19.727 90.00000% : 12534.833us 00:25:19.727 95.00000% : 16026.271us 00:25:19.727 98.00000% : 17972.318us 00:25:19.727 99.00000% : 22093.359us 00:25:19.727 99.50000% : 28503.867us 00:25:19.727 99.90000% : 38920.943us 00:25:19.727 99.99000% : 39378.837us 00:25:19.727 99.99900% : 39607.783us 00:25:19.727 99.99990% : 39607.783us 00:25:19.727 99.99999% : 39607.783us 00:25:19.727 00:25:19.727 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:25:19.727 ================================================================================= 00:25:19.727 1.00000% : 7383.532us 00:25:19.727 10.00000% : 7669.715us 00:25:19.727 25.00000% : 7955.899us 00:25:19.727 50.00000% : 8356.555us 00:25:19.727 75.00000% : 9386.816us 00:25:19.727 90.00000% : 12935.490us 00:25:19.727 95.00000% : 16026.271us 00:25:19.727 98.00000% : 18430.211us 00:25:19.727 99.00000% : 22894.672us 00:25:19.727 99.50000% : 26901.240us 00:25:19.727 99.90000% : 37089.369us 00:25:19.727 99.99000% : 37547.263us 00:25:19.727 99.99900% : 37776.210us 00:25:19.727 99.99990% : 37776.210us 00:25:19.727 99.99999% : 37776.210us 00:25:19.727 00:25:19.727 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:25:19.727 ================================================================================= 00:25:19.727 1.00000% : 7383.532us 00:25:19.727 10.00000% : 7669.715us 00:25:19.727 25.00000% : 7955.899us 00:25:19.727 50.00000% : 8356.555us 00:25:19.727 75.00000% : 9386.816us 00:25:19.727 90.00000% : 13393.383us 00:25:19.727 95.00000% : 15568.377us 00:25:19.727 98.00000% : 18544.685us 00:25:19.727 
99.00000% : 22436.779us 00:25:19.727 99.50000% : 25298.613us 00:25:19.727 99.90000% : 34799.902us 00:25:19.727 99.99000% : 35486.742us 00:25:19.727 99.99900% : 35486.742us 00:25:19.727 99.99990% : 35486.742us 00:25:19.727 99.99999% : 35486.742us 00:25:19.727 00:25:19.727 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:25:19.727 ================================================================================= 00:25:19.727 1.00000% : 7383.532us 00:25:19.727 10.00000% : 7669.715us 00:25:19.727 25.00000% : 7955.899us 00:25:19.727 50.00000% : 8356.555us 00:25:19.727 75.00000% : 9444.052us 00:25:19.727 90.00000% : 13164.437us 00:25:19.727 95.00000% : 15568.377us 00:25:19.727 98.00000% : 19002.578us 00:25:19.727 99.00000% : 21292.045us 00:25:19.727 99.50000% : 23467.039us 00:25:19.727 99.90000% : 32739.382us 00:25:19.727 99.99000% : 33197.275us 00:25:19.727 99.99900% : 33197.275us 00:25:19.727 99.99990% : 33197.275us 00:25:19.727 99.99999% : 33197.275us 00:25:19.727 00:25:19.727 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:25:19.727 ================================================================================= 00:25:19.727 1.00000% : 7383.532us 00:25:19.727 10.00000% : 7669.715us 00:25:19.727 25.00000% : 7955.899us 00:25:19.727 50.00000% : 8356.555us 00:25:19.727 75.00000% : 9444.052us 00:25:19.727 90.00000% : 12649.307us 00:25:19.727 95.00000% : 15797.324us 00:25:19.727 98.00000% : 19345.998us 00:25:19.727 99.00000% : 20376.259us 00:25:19.727 99.50000% : 21864.412us 00:25:19.727 99.90000% : 30449.914us 00:25:19.727 99.99000% : 30907.808us 00:25:19.727 99.99900% : 30907.808us 00:25:19.727 99.99990% : 30907.808us 00:25:19.727 99.99999% : 30907.808us 00:25:19.727 00:25:19.727 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:25:19.727 ============================================================================== 00:25:19.727 Range in us Cumulative IO count 00:25:19.727 7011.493 - 7040.112: 0.0147% ( 2) 00:25:19.727 7040.112 - 7068.730: 0.0295% ( 2) 00:25:19.727 7097.348 - 7125.967: 0.1032% ( 10) 00:25:19.727 7125.967 - 7154.585: 0.1474% ( 6) 00:25:19.727 7154.585 - 7183.203: 0.2064% ( 8) 00:25:19.727 7183.203 - 7211.822: 0.3759% ( 23) 00:25:19.727 7211.822 - 7240.440: 0.5749% ( 27) 00:25:19.727 7240.440 - 7269.059: 0.8918% ( 43) 00:25:19.727 7269.059 - 7297.677: 1.1424% ( 34) 00:25:19.727 7297.677 - 7326.295: 1.5920% ( 61) 00:25:19.727 7326.295 - 7383.532: 2.7860% ( 162) 00:25:19.727 7383.532 - 7440.769: 4.5032% ( 233) 00:25:19.727 7440.769 - 7498.005: 6.3384% ( 249) 00:25:19.727 7498.005 - 7555.242: 8.4242% ( 283) 00:25:19.727 7555.242 - 7612.479: 10.7532% ( 316) 00:25:19.727 7612.479 - 7669.715: 13.1633% ( 327) 00:25:19.727 7669.715 - 7726.952: 15.4260% ( 307) 00:25:19.727 7726.952 - 7784.189: 18.2562% ( 384) 00:25:19.727 7784.189 - 7841.425: 21.0864% ( 384) 00:25:19.728 7841.425 - 7898.662: 23.8723% ( 378) 00:25:19.728 7898.662 - 7955.899: 26.6509% ( 377) 00:25:19.728 7955.899 - 8013.135: 29.6728% ( 410) 00:25:19.728 8013.135 - 8070.372: 32.8641% ( 433) 00:25:19.728 8070.372 - 8127.609: 35.8343% ( 403) 00:25:19.728 8127.609 - 8184.845: 39.1436% ( 449) 00:25:19.728 8184.845 - 8242.082: 42.3423% ( 434) 00:25:19.728 8242.082 - 8299.319: 45.4820% ( 426) 00:25:19.728 8299.319 - 8356.555: 48.5259% ( 413) 00:25:19.728 8356.555 - 8413.792: 51.6215% ( 420) 00:25:19.728 8413.792 - 8471.029: 54.4664% ( 386) 00:25:19.728 8471.029 - 8528.266: 57.0386% ( 349) 00:25:19.728 8528.266 - 8585.502: 59.4192% ( 323) 00:25:19.728 8585.502 - 8642.739: 
61.3650% ( 264) 00:25:19.728 8642.739 - 8699.976: 63.1265% ( 239) 00:25:19.728 8699.976 - 8757.212: 64.5710% ( 196) 00:25:19.728 8757.212 - 8814.449: 65.9493% ( 187) 00:25:19.728 8814.449 - 8871.686: 67.0622% ( 151) 00:25:19.728 8871.686 - 8928.922: 68.1751% ( 151) 00:25:19.728 8928.922 - 8986.159: 69.0743% ( 122) 00:25:19.728 8986.159 - 9043.396: 69.9145% ( 114) 00:25:19.728 9043.396 - 9100.632: 70.7695% ( 116) 00:25:19.728 9100.632 - 9157.869: 71.5728% ( 109) 00:25:19.728 9157.869 - 9215.106: 72.3541% ( 106) 00:25:19.728 9215.106 - 9272.342: 73.0469% ( 94) 00:25:19.728 9272.342 - 9329.579: 73.7249% ( 92) 00:25:19.728 9329.579 - 9386.816: 74.3146% ( 80) 00:25:19.728 9386.816 - 9444.052: 74.8968% ( 79) 00:25:19.728 9444.052 - 9501.289: 75.3611% ( 63) 00:25:19.728 9501.289 - 9558.526: 75.8623% ( 68) 00:25:19.728 9558.526 - 9615.762: 76.3267% ( 63) 00:25:19.728 9615.762 - 9672.999: 76.6804% ( 48) 00:25:19.728 9672.999 - 9730.236: 77.1595% ( 65) 00:25:19.728 9730.236 - 9787.472: 77.5575% ( 54) 00:25:19.728 9787.472 - 9844.709: 77.8965% ( 46) 00:25:19.728 9844.709 - 9901.946: 78.2724% ( 51) 00:25:19.728 9901.946 - 9959.183: 78.6041% ( 45) 00:25:19.728 9959.183 - 10016.419: 78.9726% ( 50) 00:25:19.728 10016.419 - 10073.656: 79.2969% ( 44) 00:25:19.728 10073.656 - 10130.893: 79.6285% ( 45) 00:25:19.728 10130.893 - 10188.129: 80.0118% ( 52) 00:25:19.728 10188.129 - 10245.366: 80.3213% ( 42) 00:25:19.728 10245.366 - 10302.603: 80.6456% ( 44) 00:25:19.728 10302.603 - 10359.839: 80.9994% ( 48) 00:25:19.728 10359.839 - 10417.076: 81.2942% ( 40) 00:25:19.728 10417.076 - 10474.313: 81.6185% ( 44) 00:25:19.728 10474.313 - 10531.549: 81.9281% ( 42) 00:25:19.728 10531.549 - 10588.786: 82.1934% ( 36) 00:25:19.728 10588.786 - 10646.023: 82.5029% ( 42) 00:25:19.728 10646.023 - 10703.259: 82.8199% ( 43) 00:25:19.728 10703.259 - 10760.496: 83.2252% ( 55) 00:25:19.728 10760.496 - 10817.733: 83.6306% ( 55) 00:25:19.728 10817.733 - 10874.969: 83.9991% ( 50) 00:25:19.728 10874.969 - 10932.206: 84.3455% ( 47) 00:25:19.728 10932.206 - 10989.443: 84.6256% ( 38) 00:25:19.728 10989.443 - 11046.679: 84.8835% ( 35) 00:25:19.728 11046.679 - 11103.916: 85.1636% ( 38) 00:25:19.728 11103.916 - 11161.153: 85.3921% ( 31) 00:25:19.728 11161.153 - 11218.390: 85.6648% ( 37) 00:25:19.728 11218.390 - 11275.626: 85.9449% ( 38) 00:25:19.728 11275.626 - 11332.863: 86.1660% ( 30) 00:25:19.728 11332.863 - 11390.100: 86.4682% ( 41) 00:25:19.728 11390.100 - 11447.336: 86.7409% ( 37) 00:25:19.728 11447.336 - 11504.573: 87.0136% ( 37) 00:25:19.728 11504.573 - 11561.810: 87.2568% ( 33) 00:25:19.728 11561.810 - 11619.046: 87.5147% ( 35) 00:25:19.728 11619.046 - 11676.283: 87.7653% ( 34) 00:25:19.728 11676.283 - 11733.520: 88.0085% ( 33) 00:25:19.728 11733.520 - 11790.756: 88.2812% ( 37) 00:25:19.728 11790.756 - 11847.993: 88.4876% ( 28) 00:25:19.728 11847.993 - 11905.230: 88.6719% ( 25) 00:25:19.728 11905.230 - 11962.466: 88.8488% ( 24) 00:25:19.728 11962.466 - 12019.703: 88.9519% ( 14) 00:25:19.728 12019.703 - 12076.940: 89.0920% ( 19) 00:25:19.728 12076.940 - 12134.176: 89.2468% ( 21) 00:25:19.728 12134.176 - 12191.413: 89.4015% ( 21) 00:25:19.728 12191.413 - 12248.650: 89.5047% ( 14) 00:25:19.728 12248.650 - 12305.886: 89.6448% ( 19) 00:25:19.728 12305.886 - 12363.123: 89.7627% ( 16) 00:25:19.728 12363.123 - 12420.360: 89.8953% ( 18) 00:25:19.728 12420.360 - 12477.597: 90.0133% ( 16) 00:25:19.728 12477.597 - 12534.833: 90.1165% ( 14) 00:25:19.728 12534.833 - 12592.070: 90.2344% ( 16) 00:25:19.728 12592.070 - 12649.307: 90.3449% ( 15) 
00:25:19.728 12649.307 - 12706.543: 90.4923% ( 20) 00:25:19.728 12706.543 - 12763.780: 90.5955% ( 14) 00:25:19.728 12763.780 - 12821.017: 90.7134% ( 16) 00:25:19.728 12821.017 - 12878.253: 90.8682% ( 21) 00:25:19.728 12878.253 - 12935.490: 90.9640% ( 13) 00:25:19.728 12935.490 - 12992.727: 91.0967% ( 18) 00:25:19.728 12992.727 - 13049.963: 91.1851% ( 12) 00:25:19.728 13049.963 - 13107.200: 91.2957% ( 15) 00:25:19.728 13107.200 - 13164.437: 91.3694% ( 10) 00:25:19.728 13164.437 - 13221.673: 91.4357% ( 9) 00:25:19.728 13221.673 - 13278.910: 91.5242% ( 12) 00:25:19.728 13278.910 - 13336.147: 91.5758% ( 7) 00:25:19.728 13336.147 - 13393.383: 91.6495% ( 10) 00:25:19.728 13393.383 - 13450.620: 91.7084% ( 8) 00:25:19.728 13450.620 - 13507.857: 91.7748% ( 9) 00:25:19.728 13507.857 - 13565.093: 91.8632% ( 12) 00:25:19.728 13565.093 - 13622.330: 91.9222% ( 8) 00:25:19.728 13622.330 - 13679.567: 92.0254% ( 14) 00:25:19.728 13679.567 - 13736.803: 92.0622% ( 5) 00:25:19.728 13736.803 - 13794.040: 92.1506% ( 12) 00:25:19.728 13794.040 - 13851.277: 92.2096% ( 8) 00:25:19.728 13851.277 - 13908.514: 92.2538% ( 6) 00:25:19.728 13908.514 - 13965.750: 92.3054% ( 7) 00:25:19.728 13965.750 - 14022.987: 92.3423% ( 5) 00:25:19.728 14022.987 - 14080.224: 92.3718% ( 4) 00:25:19.728 14080.224 - 14137.460: 92.4086% ( 5) 00:25:19.728 14137.460 - 14194.697: 92.4381% ( 4) 00:25:19.728 14194.697 - 14251.934: 92.4676% ( 4) 00:25:19.728 14251.934 - 14309.170: 92.5044% ( 5) 00:25:19.728 14309.170 - 14366.407: 92.5192% ( 2) 00:25:19.728 14366.407 - 14423.644: 92.5339% ( 2) 00:25:19.728 14423.644 - 14480.880: 92.5560% ( 3) 00:25:19.728 14480.880 - 14538.117: 92.5781% ( 3) 00:25:19.728 14538.117 - 14595.354: 92.6150% ( 5) 00:25:19.728 14595.354 - 14652.590: 92.6518% ( 5) 00:25:19.728 14652.590 - 14767.064: 92.7771% ( 17) 00:25:19.728 14767.064 - 14881.537: 92.9098% ( 18) 00:25:19.728 14881.537 - 14996.010: 93.0719% ( 22) 00:25:19.728 14996.010 - 15110.484: 93.2488% ( 24) 00:25:19.728 15110.484 - 15224.957: 93.4478% ( 27) 00:25:19.728 15224.957 - 15339.431: 93.6468% ( 27) 00:25:19.728 15339.431 - 15453.904: 93.8458% ( 27) 00:25:19.728 15453.904 - 15568.377: 94.0448% ( 27) 00:25:19.728 15568.377 - 15682.851: 94.2512% ( 28) 00:25:19.728 15682.851 - 15797.324: 94.4870% ( 32) 00:25:19.728 15797.324 - 15911.797: 94.7081% ( 30) 00:25:19.728 15911.797 - 16026.271: 94.9735% ( 36) 00:25:19.728 16026.271 - 16140.744: 95.2830% ( 42) 00:25:19.728 16140.744 - 16255.217: 95.5336% ( 34) 00:25:19.728 16255.217 - 16369.691: 95.7547% ( 30) 00:25:19.728 16369.691 - 16484.164: 95.9316% ( 24) 00:25:19.728 16484.164 - 16598.638: 96.1159% ( 25) 00:25:19.728 16598.638 - 16713.111: 96.3001% ( 25) 00:25:19.728 16713.111 - 16827.584: 96.4917% ( 26) 00:25:19.728 16827.584 - 16942.058: 96.7055% ( 29) 00:25:19.728 16942.058 - 17056.531: 96.8897% ( 25) 00:25:19.728 17056.531 - 17171.004: 97.0519% ( 22) 00:25:19.728 17171.004 - 17285.478: 97.2214% ( 23) 00:25:19.728 17285.478 - 17399.951: 97.3320% ( 15) 00:25:19.728 17399.951 - 17514.424: 97.4425% ( 15) 00:25:19.728 17514.424 - 17628.898: 97.5236% ( 11) 00:25:19.728 17628.898 - 17743.371: 97.5899% ( 9) 00:25:19.728 17743.371 - 17857.845: 97.6415% ( 7) 00:25:19.728 17857.845 - 17972.318: 97.7152% ( 10) 00:25:19.728 17972.318 - 18086.791: 97.7889% ( 10) 00:25:19.728 18086.791 - 18201.265: 97.8626% ( 10) 00:25:19.728 18201.265 - 18315.738: 97.9290% ( 9) 00:25:19.728 18315.738 - 18430.211: 98.0027% ( 10) 00:25:19.728 18430.211 - 18544.685: 98.0837% ( 11) 00:25:19.728 18544.685 - 18659.158: 98.1501% ( 9) 
00:25:19.728 18659.158 - 18773.631: 98.2385% ( 12) 00:25:19.728 18773.631 - 18888.105: 98.2827% ( 6) 00:25:19.728 18888.105 - 19002.578: 98.2975% ( 2) 00:25:19.728 19002.578 - 19117.052: 98.3122% ( 2) 00:25:19.728 19117.052 - 19231.525: 98.3343% ( 3) 00:25:19.728 19231.525 - 19345.998: 98.3491% ( 2) 00:25:19.728 19345.998 - 19460.472: 98.3638% ( 2) 00:25:19.728 19460.472 - 19574.945: 98.3933% ( 4) 00:25:19.728 19574.945 - 19689.418: 98.4522% ( 8) 00:25:19.728 19689.418 - 19803.892: 98.4817% ( 4) 00:25:19.728 19803.892 - 19918.365: 98.5333% ( 7) 00:25:19.728 19918.365 - 20032.838: 98.5849% ( 7) 00:25:19.728 20032.838 - 20147.312: 98.6144% ( 4) 00:25:19.728 20147.312 - 20261.785: 98.6660% ( 7) 00:25:19.728 20261.785 - 20376.259: 98.7102% ( 6) 00:25:19.728 20376.259 - 20490.732: 98.7544% ( 6) 00:25:19.728 20490.732 - 20605.205: 98.7913% ( 5) 00:25:19.728 20605.205 - 20719.679: 98.8502% ( 8) 00:25:19.728 20719.679 - 20834.152: 98.8723% ( 3) 00:25:19.728 20834.152 - 20948.625: 98.9092% ( 5) 00:25:19.728 20948.625 - 21063.099: 98.9387% ( 4) 00:25:19.728 21063.099 - 21177.572: 98.9608% ( 3) 00:25:19.728 21177.572 - 21292.045: 98.9903% ( 4) 00:25:19.728 21292.045 - 21406.519: 99.0124% ( 3) 00:25:19.728 21406.519 - 21520.992: 99.0419% ( 4) 00:25:19.728 21520.992 - 21635.466: 99.0566% ( 2) 00:25:19.728 26557.820 - 26672.293: 99.0640% ( 1) 00:25:19.728 26672.293 - 26786.767: 99.0713% ( 1) 00:25:19.728 26786.767 - 26901.240: 99.0861% ( 2) 00:25:19.728 26901.240 - 27015.714: 99.0935% ( 1) 00:25:19.729 27015.714 - 27130.187: 99.1156% ( 3) 00:25:19.729 27130.187 - 27244.660: 99.1303% ( 2) 00:25:19.729 27244.660 - 27359.134: 99.1450% ( 2) 00:25:19.729 27359.134 - 27473.607: 99.1598% ( 2) 00:25:19.729 27473.607 - 27588.080: 99.1745% ( 2) 00:25:19.729 27588.080 - 27702.554: 99.1966% ( 3) 00:25:19.729 27702.554 - 27817.027: 99.2114% ( 2) 00:25:19.729 27817.027 - 27931.500: 99.2261% ( 2) 00:25:19.729 27931.500 - 28045.974: 99.2409% ( 2) 00:25:19.729 28045.974 - 28160.447: 99.2556% ( 2) 00:25:19.729 28160.447 - 28274.921: 99.2703% ( 2) 00:25:19.729 28274.921 - 28389.394: 99.2851% ( 2) 00:25:19.729 28389.394 - 28503.867: 99.2998% ( 2) 00:25:19.729 28503.867 - 28618.341: 99.3146% ( 2) 00:25:19.729 28618.341 - 28732.814: 99.3293% ( 2) 00:25:19.729 28732.814 - 28847.287: 99.3514% ( 3) 00:25:19.729 28847.287 - 28961.761: 99.3662% ( 2) 00:25:19.729 28961.761 - 29076.234: 99.3735% ( 1) 00:25:19.729 29076.234 - 29190.707: 99.4030% ( 4) 00:25:19.729 29190.707 - 29305.181: 99.4104% ( 1) 00:25:19.729 29305.181 - 29534.128: 99.4472% ( 5) 00:25:19.729 29534.128 - 29763.074: 99.4767% ( 4) 00:25:19.729 29763.074 - 29992.021: 99.5062% ( 4) 00:25:19.729 29992.021 - 30220.968: 99.5283% ( 3) 00:25:19.729 38234.103 - 38463.050: 99.5357% ( 1) 00:25:19.729 38463.050 - 38691.997: 99.5652% ( 4) 00:25:19.729 38691.997 - 38920.943: 99.5946% ( 4) 00:25:19.729 38920.943 - 39149.890: 99.6315% ( 5) 00:25:19.729 39149.890 - 39378.837: 99.6683% ( 5) 00:25:19.729 39378.837 - 39607.783: 99.7052% ( 5) 00:25:19.729 39607.783 - 39836.730: 99.7420% ( 5) 00:25:19.729 39836.730 - 40065.677: 99.7789% ( 5) 00:25:19.729 40065.677 - 40294.624: 99.8157% ( 5) 00:25:19.729 40294.624 - 40523.570: 99.8452% ( 4) 00:25:19.729 40523.570 - 40752.517: 99.8968% ( 7) 00:25:19.729 40752.517 - 40981.464: 99.9337% ( 5) 00:25:19.729 40981.464 - 41210.410: 99.9779% ( 6) 00:25:19.729 41210.410 - 41439.357: 100.0000% ( 3) 00:25:19.729 00:25:19.729 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:25:19.729 
============================================================================== 00:25:19.729 Range in us Cumulative IO count 00:25:19.729 7125.967 - 7154.585: 0.0295% ( 4) 00:25:19.729 7154.585 - 7183.203: 0.0442% ( 2) 00:25:19.729 7183.203 - 7211.822: 0.0811% ( 5) 00:25:19.729 7211.822 - 7240.440: 0.1548% ( 10) 00:25:19.729 7240.440 - 7269.059: 0.2211% ( 9) 00:25:19.729 7269.059 - 7297.677: 0.3169% ( 13) 00:25:19.729 7297.677 - 7326.295: 0.5159% ( 27) 00:25:19.729 7326.295 - 7383.532: 1.1719% ( 89) 00:25:19.729 7383.532 - 7440.769: 2.2479% ( 146) 00:25:19.729 7440.769 - 7498.005: 3.5746% ( 180) 00:25:19.729 7498.005 - 7555.242: 5.4245% ( 251) 00:25:19.729 7555.242 - 7612.479: 7.7314% ( 313) 00:25:19.729 7612.479 - 7669.715: 10.3552% ( 356) 00:25:19.729 7669.715 - 7726.952: 13.0454% ( 365) 00:25:19.729 7726.952 - 7784.189: 15.8461% ( 380) 00:25:19.729 7784.189 - 7841.425: 18.8827% ( 412) 00:25:19.729 7841.425 - 7898.662: 22.0887% ( 435) 00:25:19.729 7898.662 - 7955.899: 25.4275% ( 453) 00:25:19.729 7955.899 - 8013.135: 28.7957% ( 457) 00:25:19.729 8013.135 - 8070.372: 32.2966% ( 475) 00:25:19.729 8070.372 - 8127.609: 35.8343% ( 480) 00:25:19.729 8127.609 - 8184.845: 39.5268% ( 501) 00:25:19.729 8184.845 - 8242.082: 43.1235% ( 488) 00:25:19.729 8242.082 - 8299.319: 46.5654% ( 467) 00:25:19.729 8299.319 - 8356.555: 49.9705% ( 462) 00:25:19.729 8356.555 - 8413.792: 53.0587% ( 419) 00:25:19.729 8413.792 - 8471.029: 55.9183% ( 388) 00:25:19.729 8471.029 - 8528.266: 58.4095% ( 338) 00:25:19.729 8528.266 - 8585.502: 60.5542% ( 291) 00:25:19.729 8585.502 - 8642.739: 62.4337% ( 255) 00:25:19.729 8642.739 - 8699.976: 64.0772% ( 223) 00:25:19.729 8699.976 - 8757.212: 65.5218% ( 196) 00:25:19.729 8757.212 - 8814.449: 66.8485% ( 180) 00:25:19.729 8814.449 - 8871.686: 68.0793% ( 167) 00:25:19.729 8871.686 - 8928.922: 69.1554% ( 146) 00:25:19.729 8928.922 - 8986.159: 70.1356% ( 133) 00:25:19.729 8986.159 - 9043.396: 70.9758% ( 114) 00:25:19.729 9043.396 - 9100.632: 71.7866% ( 110) 00:25:19.729 9100.632 - 9157.869: 72.5973% ( 110) 00:25:19.729 9157.869 - 9215.106: 73.3859% ( 107) 00:25:19.729 9215.106 - 9272.342: 74.1082% ( 98) 00:25:19.729 9272.342 - 9329.579: 74.7126% ( 82) 00:25:19.729 9329.579 - 9386.816: 75.2285% ( 70) 00:25:19.729 9386.816 - 9444.052: 75.6707% ( 60) 00:25:19.729 9444.052 - 9501.289: 76.0982% ( 58) 00:25:19.729 9501.289 - 9558.526: 76.5109% ( 56) 00:25:19.729 9558.526 - 9615.762: 76.9163% ( 55) 00:25:19.729 9615.762 - 9672.999: 77.3143% ( 54) 00:25:19.729 9672.999 - 9730.236: 77.6754% ( 49) 00:25:19.729 9730.236 - 9787.472: 78.0218% ( 47) 00:25:19.729 9787.472 - 9844.709: 78.3903% ( 50) 00:25:19.729 9844.709 - 9901.946: 78.7662% ( 51) 00:25:19.729 9901.946 - 9959.183: 79.1495% ( 52) 00:25:19.729 9959.183 - 10016.419: 79.5032% ( 48) 00:25:19.729 10016.419 - 10073.656: 79.9160% ( 56) 00:25:19.729 10073.656 - 10130.893: 80.2698% ( 48) 00:25:19.729 10130.893 - 10188.129: 80.6456% ( 51) 00:25:19.729 10188.129 - 10245.366: 80.9994% ( 48) 00:25:19.729 10245.366 - 10302.603: 81.3311% ( 45) 00:25:19.729 10302.603 - 10359.839: 81.6406% ( 42) 00:25:19.729 10359.839 - 10417.076: 81.9354% ( 40) 00:25:19.729 10417.076 - 10474.313: 82.2302% ( 40) 00:25:19.729 10474.313 - 10531.549: 82.5251% ( 40) 00:25:19.729 10531.549 - 10588.786: 82.8125% ( 39) 00:25:19.729 10588.786 - 10646.023: 83.1073% ( 40) 00:25:19.729 10646.023 - 10703.259: 83.3948% ( 39) 00:25:19.729 10703.259 - 10760.496: 83.6601% ( 36) 00:25:19.729 10760.496 - 10817.733: 83.9180% ( 35) 00:25:19.729 10817.733 - 10874.969: 84.1023% ( 25) 
00:25:19.729 10874.969 - 10932.206: 84.3013% ( 27) 00:25:19.729 10932.206 - 10989.443: 84.5003% ( 27) 00:25:19.729 10989.443 - 11046.679: 84.6919% ( 26) 00:25:19.729 11046.679 - 11103.916: 84.8762% ( 25) 00:25:19.729 11103.916 - 11161.153: 85.0531% ( 24) 00:25:19.729 11161.153 - 11218.390: 85.2447% ( 26) 00:25:19.729 11218.390 - 11275.626: 85.4363% ( 26) 00:25:19.729 11275.626 - 11332.863: 85.6279% ( 26) 00:25:19.729 11332.863 - 11390.100: 85.8564% ( 31) 00:25:19.729 11390.100 - 11447.336: 86.0996% ( 33) 00:25:19.729 11447.336 - 11504.573: 86.3502% ( 34) 00:25:19.729 11504.573 - 11561.810: 86.6156% ( 36) 00:25:19.729 11561.810 - 11619.046: 86.8735% ( 35) 00:25:19.729 11619.046 - 11676.283: 87.0946% ( 30) 00:25:19.729 11676.283 - 11733.520: 87.3526% ( 35) 00:25:19.729 11733.520 - 11790.756: 87.5295% ( 24) 00:25:19.729 11790.756 - 11847.993: 87.7653% ( 32) 00:25:19.729 11847.993 - 11905.230: 87.9717% ( 28) 00:25:19.729 11905.230 - 11962.466: 88.1707% ( 27) 00:25:19.729 11962.466 - 12019.703: 88.3844% ( 29) 00:25:19.729 12019.703 - 12076.940: 88.5687% ( 25) 00:25:19.729 12076.940 - 12134.176: 88.7677% ( 27) 00:25:19.729 12134.176 - 12191.413: 88.9593% ( 26) 00:25:19.729 12191.413 - 12248.650: 89.1509% ( 26) 00:25:19.729 12248.650 - 12305.886: 89.3426% ( 26) 00:25:19.729 12305.886 - 12363.123: 89.5858% ( 33) 00:25:19.729 12363.123 - 12420.360: 89.7995% ( 29) 00:25:19.729 12420.360 - 12477.597: 89.9985% ( 27) 00:25:19.729 12477.597 - 12534.833: 90.1607% ( 22) 00:25:19.729 12534.833 - 12592.070: 90.3081% ( 20) 00:25:19.729 12592.070 - 12649.307: 90.4334% ( 17) 00:25:19.729 12649.307 - 12706.543: 90.5366% ( 14) 00:25:19.729 12706.543 - 12763.780: 90.6250% ( 12) 00:25:19.729 12763.780 - 12821.017: 90.7282% ( 14) 00:25:19.729 12821.017 - 12878.253: 90.8093% ( 11) 00:25:19.729 12878.253 - 12935.490: 90.8830% ( 10) 00:25:19.729 12935.490 - 12992.727: 90.9640% ( 11) 00:25:19.729 12992.727 - 13049.963: 91.0377% ( 10) 00:25:19.729 13049.963 - 13107.200: 91.1188% ( 11) 00:25:19.729 13107.200 - 13164.437: 91.1925% ( 10) 00:25:19.729 13164.437 - 13221.673: 91.2810% ( 12) 00:25:19.729 13221.673 - 13278.910: 91.3768% ( 13) 00:25:19.729 13278.910 - 13336.147: 91.4726% ( 13) 00:25:19.729 13336.147 - 13393.383: 91.5610% ( 12) 00:25:19.729 13393.383 - 13450.620: 91.6642% ( 14) 00:25:19.729 13450.620 - 13507.857: 91.7379% ( 10) 00:25:19.729 13507.857 - 13565.093: 91.7969% ( 8) 00:25:19.729 13565.093 - 13622.330: 91.8706% ( 10) 00:25:19.730 13622.330 - 13679.567: 91.9222% ( 7) 00:25:19.730 13679.567 - 13736.803: 91.9517% ( 4) 00:25:19.730 13736.803 - 13794.040: 91.9738% ( 3) 00:25:19.730 13794.040 - 13851.277: 92.0032% ( 4) 00:25:19.730 13851.277 - 13908.514: 92.0327% ( 4) 00:25:19.730 13908.514 - 13965.750: 92.0622% ( 4) 00:25:19.730 13965.750 - 14022.987: 92.0917% ( 4) 00:25:19.730 14022.987 - 14080.224: 92.1138% ( 3) 00:25:19.730 14080.224 - 14137.460: 92.1433% ( 4) 00:25:19.730 14137.460 - 14194.697: 92.1801% ( 5) 00:25:19.730 14194.697 - 14251.934: 92.2244% ( 6) 00:25:19.730 14251.934 - 14309.170: 92.2538% ( 4) 00:25:19.730 14309.170 - 14366.407: 92.2907% ( 5) 00:25:19.730 14366.407 - 14423.644: 92.3791% ( 12) 00:25:19.730 14423.644 - 14480.880: 92.4455% ( 9) 00:25:19.730 14480.880 - 14538.117: 92.5265% ( 11) 00:25:19.730 14538.117 - 14595.354: 92.6076% ( 11) 00:25:19.730 14595.354 - 14652.590: 92.6960% ( 12) 00:25:19.730 14652.590 - 14767.064: 92.8656% ( 23) 00:25:19.730 14767.064 - 14881.537: 93.0277% ( 22) 00:25:19.730 14881.537 - 14996.010: 93.2415% ( 29) 00:25:19.730 14996.010 - 15110.484: 93.4478% ( 28) 
00:25:19.730 15110.484 - 15224.957: 93.6468% ( 27) 00:25:19.730 15224.957 - 15339.431: 93.8532% ( 28) 00:25:19.730 15339.431 - 15453.904: 94.0890% ( 32) 00:25:19.730 15453.904 - 15568.377: 94.3028% ( 29) 00:25:19.730 15568.377 - 15682.851: 94.4944% ( 26) 00:25:19.730 15682.851 - 15797.324: 94.7597% ( 36) 00:25:19.730 15797.324 - 15911.797: 94.9956% ( 32) 00:25:19.730 15911.797 - 16026.271: 95.2314% ( 32) 00:25:19.730 16026.271 - 16140.744: 95.3862% ( 21) 00:25:19.730 16140.744 - 16255.217: 95.5705% ( 25) 00:25:19.730 16255.217 - 16369.691: 95.7400% ( 23) 00:25:19.730 16369.691 - 16484.164: 95.9463% ( 28) 00:25:19.730 16484.164 - 16598.638: 96.1306% ( 25) 00:25:19.730 16598.638 - 16713.111: 96.3222% ( 26) 00:25:19.730 16713.111 - 16827.584: 96.5581% ( 32) 00:25:19.730 16827.584 - 16942.058: 96.7350% ( 24) 00:25:19.730 16942.058 - 17056.531: 96.9119% ( 24) 00:25:19.730 17056.531 - 17171.004: 97.0593% ( 20) 00:25:19.730 17171.004 - 17285.478: 97.2214% ( 22) 00:25:19.730 17285.478 - 17399.951: 97.3688% ( 20) 00:25:19.730 17399.951 - 17514.424: 97.5236% ( 21) 00:25:19.730 17514.424 - 17628.898: 97.7005% ( 24) 00:25:19.730 17628.898 - 17743.371: 97.8847% ( 25) 00:25:19.730 17743.371 - 17857.845: 97.9732% ( 12) 00:25:19.730 17857.845 - 17972.318: 98.0174% ( 6) 00:25:19.730 17972.318 - 18086.791: 98.0690% ( 7) 00:25:19.730 18086.791 - 18201.265: 98.1279% ( 8) 00:25:19.730 18201.265 - 18315.738: 98.1869% ( 8) 00:25:19.730 18315.738 - 18430.211: 98.2385% ( 7) 00:25:19.730 18430.211 - 18544.685: 98.3048% ( 9) 00:25:19.730 18544.685 - 18659.158: 98.3564% ( 7) 00:25:19.730 18659.158 - 18773.631: 98.4080% ( 7) 00:25:19.730 18773.631 - 18888.105: 98.4670% ( 8) 00:25:19.730 18888.105 - 19002.578: 98.5259% ( 8) 00:25:19.730 19002.578 - 19117.052: 98.5481% ( 3) 00:25:19.730 19117.052 - 19231.525: 98.5702% ( 3) 00:25:19.730 19231.525 - 19345.998: 98.5849% ( 2) 00:25:19.730 20490.732 - 20605.205: 98.6070% ( 3) 00:25:19.730 20605.205 - 20719.679: 98.6291% ( 3) 00:25:19.730 20719.679 - 20834.152: 98.6660% ( 5) 00:25:19.730 20834.152 - 20948.625: 98.7176% ( 7) 00:25:19.730 20948.625 - 21063.099: 98.7397% ( 3) 00:25:19.730 21063.099 - 21177.572: 98.7618% ( 3) 00:25:19.730 21177.572 - 21292.045: 98.7986% ( 5) 00:25:19.730 21292.045 - 21406.519: 98.8355% ( 5) 00:25:19.730 21406.519 - 21520.992: 98.8723% ( 5) 00:25:19.730 21520.992 - 21635.466: 98.9018% ( 4) 00:25:19.730 21635.466 - 21749.939: 98.9313% ( 4) 00:25:19.730 21749.939 - 21864.412: 98.9608% ( 4) 00:25:19.730 21864.412 - 21978.886: 98.9903% ( 4) 00:25:19.730 21978.886 - 22093.359: 99.0271% ( 5) 00:25:19.730 22093.359 - 22207.832: 99.0566% ( 4) 00:25:19.730 25413.086 - 25527.560: 99.0640% ( 1) 00:25:19.730 25527.560 - 25642.033: 99.0787% ( 2) 00:25:19.730 25642.033 - 25756.507: 99.0935% ( 2) 00:25:19.730 25756.507 - 25870.980: 99.1082% ( 2) 00:25:19.730 25870.980 - 25985.453: 99.1229% ( 2) 00:25:19.730 25985.453 - 26099.927: 99.1377% ( 2) 00:25:19.730 26099.927 - 26214.400: 99.1598% ( 3) 00:25:19.730 26214.400 - 26328.873: 99.1745% ( 2) 00:25:19.730 26328.873 - 26443.347: 99.1893% ( 2) 00:25:19.730 26443.347 - 26557.820: 99.2114% ( 3) 00:25:19.730 26557.820 - 26672.293: 99.2261% ( 2) 00:25:19.730 26672.293 - 26786.767: 99.2409% ( 2) 00:25:19.730 26786.767 - 26901.240: 99.2630% ( 3) 00:25:19.730 26901.240 - 27015.714: 99.2777% ( 2) 00:25:19.730 27015.714 - 27130.187: 99.2998% ( 3) 00:25:19.730 27130.187 - 27244.660: 99.3146% ( 2) 00:25:19.730 27244.660 - 27359.134: 99.3367% ( 3) 00:25:19.730 27359.134 - 27473.607: 99.3514% ( 2) 00:25:19.730 27473.607 - 
27588.080: 99.3588% ( 1) 00:25:19.730 27588.080 - 27702.554: 99.3735% ( 2) 00:25:19.730 27702.554 - 27817.027: 99.3956% ( 3) 00:25:19.730 27817.027 - 27931.500: 99.4177% ( 3) 00:25:19.730 27931.500 - 28045.974: 99.4325% ( 2) 00:25:19.730 28045.974 - 28160.447: 99.4472% ( 2) 00:25:19.730 28160.447 - 28274.921: 99.4693% ( 3) 00:25:19.730 28274.921 - 28389.394: 99.4841% ( 2) 00:25:19.730 28389.394 - 28503.867: 99.5062% ( 3) 00:25:19.730 28503.867 - 28618.341: 99.5209% ( 2) 00:25:19.730 28618.341 - 28732.814: 99.5283% ( 1) 00:25:19.730 36631.476 - 36860.423: 99.5504% ( 3) 00:25:19.730 36860.423 - 37089.369: 99.5873% ( 5) 00:25:19.730 37089.369 - 37318.316: 99.6315% ( 6) 00:25:19.730 37318.316 - 37547.263: 99.6610% ( 4) 00:25:19.730 37547.263 - 37776.210: 99.7052% ( 6) 00:25:19.730 37776.210 - 38005.156: 99.7420% ( 5) 00:25:19.730 38005.156 - 38234.103: 99.7863% ( 6) 00:25:19.730 38234.103 - 38463.050: 99.8305% ( 6) 00:25:19.730 38463.050 - 38691.997: 99.8747% ( 6) 00:25:19.730 38691.997 - 38920.943: 99.9116% ( 5) 00:25:19.730 38920.943 - 39149.890: 99.9558% ( 6) 00:25:19.730 39149.890 - 39378.837: 99.9926% ( 5) 00:25:19.730 39378.837 - 39607.783: 100.0000% ( 1) 00:25:19.730 00:25:19.730 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:25:19.730 ============================================================================== 00:25:19.730 Range in us Cumulative IO count 00:25:19.730 7154.585 - 7183.203: 0.0442% ( 6) 00:25:19.730 7183.203 - 7211.822: 0.1032% ( 8) 00:25:19.730 7211.822 - 7240.440: 0.1179% ( 2) 00:25:19.730 7240.440 - 7269.059: 0.1621% ( 6) 00:25:19.730 7269.059 - 7297.677: 0.2432% ( 11) 00:25:19.730 7297.677 - 7326.295: 0.4054% ( 22) 00:25:19.730 7326.295 - 7383.532: 1.0982% ( 94) 00:25:19.730 7383.532 - 7440.769: 2.2479% ( 156) 00:25:19.730 7440.769 - 7498.005: 3.8915% ( 223) 00:25:19.730 7498.005 - 7555.242: 5.7488% ( 252) 00:25:19.730 7555.242 - 7612.479: 7.7830% ( 276) 00:25:19.730 7612.479 - 7669.715: 10.3184% ( 344) 00:25:19.730 7669.715 - 7726.952: 12.9643% ( 359) 00:25:19.730 7726.952 - 7784.189: 15.8535% ( 392) 00:25:19.730 7784.189 - 7841.425: 19.2070% ( 455) 00:25:19.730 7841.425 - 7898.662: 22.4204% ( 436) 00:25:19.730 7898.662 - 7955.899: 25.7075% ( 446) 00:25:19.730 7955.899 - 8013.135: 29.1790% ( 471) 00:25:19.730 8013.135 - 8070.372: 32.6282% ( 468) 00:25:19.730 8070.372 - 8127.609: 36.1660% ( 480) 00:25:19.730 8127.609 - 8184.845: 39.6816% ( 477) 00:25:19.730 8184.845 - 8242.082: 43.2930% ( 490) 00:25:19.730 8242.082 - 8299.319: 46.8455% ( 482) 00:25:19.730 8299.319 - 8356.555: 50.2358% ( 460) 00:25:19.730 8356.555 - 8413.792: 53.2871% ( 414) 00:25:19.730 8413.792 - 8471.029: 56.0805% ( 379) 00:25:19.730 8471.029 - 8528.266: 58.4979% ( 328) 00:25:19.730 8528.266 - 8585.502: 60.6353% ( 290) 00:25:19.730 8585.502 - 8642.739: 62.5000% ( 253) 00:25:19.730 8642.739 - 8699.976: 64.1952% ( 230) 00:25:19.730 8699.976 - 8757.212: 65.7134% ( 206) 00:25:19.730 8757.212 - 8814.449: 67.0254% ( 178) 00:25:19.730 8814.449 - 8871.686: 68.1972% ( 159) 00:25:19.730 8871.686 - 8928.922: 69.2659% ( 145) 00:25:19.730 8928.922 - 8986.159: 70.2978% ( 140) 00:25:19.730 8986.159 - 9043.396: 71.2264% ( 126) 00:25:19.730 9043.396 - 9100.632: 72.0445% ( 111) 00:25:19.730 9100.632 - 9157.869: 72.7963% ( 102) 00:25:19.730 9157.869 - 9215.106: 73.5112% ( 97) 00:25:19.730 9215.106 - 9272.342: 74.2188% ( 96) 00:25:19.730 9272.342 - 9329.579: 74.8379% ( 84) 00:25:19.730 9329.579 - 9386.816: 75.3759% ( 73) 00:25:19.730 9386.816 - 9444.052: 75.8476% ( 64) 00:25:19.730 9444.052 - 
9501.289: 76.2751% ( 58) 00:25:19.730 9501.289 - 9558.526: 76.7173% ( 60) 00:25:19.730 9558.526 - 9615.762: 77.1595% ( 60) 00:25:19.730 9615.762 - 9672.999: 77.5427% ( 52) 00:25:19.730 9672.999 - 9730.236: 77.8228% ( 38) 00:25:19.730 9730.236 - 9787.472: 78.1840% ( 49) 00:25:19.730 9787.472 - 9844.709: 78.6041% ( 57) 00:25:19.730 9844.709 - 9901.946: 79.0537% ( 61) 00:25:19.730 9901.946 - 9959.183: 79.4369% ( 52) 00:25:19.730 9959.183 - 10016.419: 79.8644% ( 58) 00:25:19.730 10016.419 - 10073.656: 80.2476% ( 52) 00:25:19.730 10073.656 - 10130.893: 80.6456% ( 54) 00:25:19.730 10130.893 - 10188.129: 81.0584% ( 56) 00:25:19.730 10188.129 - 10245.366: 81.4858% ( 58) 00:25:19.730 10245.366 - 10302.603: 81.8691% ( 52) 00:25:19.730 10302.603 - 10359.839: 82.2302% ( 49) 00:25:19.730 10359.839 - 10417.076: 82.5472% ( 43) 00:25:19.730 10417.076 - 10474.313: 82.8199% ( 37) 00:25:19.730 10474.313 - 10531.549: 83.0778% ( 35) 00:25:19.730 10531.549 - 10588.786: 83.3726% ( 40) 00:25:19.731 10588.786 - 10646.023: 83.6601% ( 39) 00:25:19.731 10646.023 - 10703.259: 83.9328% ( 37) 00:25:19.731 10703.259 - 10760.496: 84.2276% ( 40) 00:25:19.731 10760.496 - 10817.733: 84.4929% ( 36) 00:25:19.731 10817.733 - 10874.969: 84.7656% ( 37) 00:25:19.731 10874.969 - 10932.206: 84.9867% ( 30) 00:25:19.731 10932.206 - 10989.443: 85.1931% ( 28) 00:25:19.731 10989.443 - 11046.679: 85.4068% ( 29) 00:25:19.731 11046.679 - 11103.916: 85.6353% ( 31) 00:25:19.731 11103.916 - 11161.153: 85.8712% ( 32) 00:25:19.731 11161.153 - 11218.390: 86.0775% ( 28) 00:25:19.731 11218.390 - 11275.626: 86.2986% ( 30) 00:25:19.731 11275.626 - 11332.863: 86.5050% ( 28) 00:25:19.731 11332.863 - 11390.100: 86.6819% ( 24) 00:25:19.731 11390.100 - 11447.336: 86.8662% ( 25) 00:25:19.731 11447.336 - 11504.573: 87.0578% ( 26) 00:25:19.731 11504.573 - 11561.810: 87.2494% ( 26) 00:25:19.731 11561.810 - 11619.046: 87.4189% ( 23) 00:25:19.731 11619.046 - 11676.283: 87.5663% ( 20) 00:25:19.731 11676.283 - 11733.520: 87.6916% ( 17) 00:25:19.731 11733.520 - 11790.756: 87.8022% ( 15) 00:25:19.731 11790.756 - 11847.993: 87.9054% ( 14) 00:25:19.731 11847.993 - 11905.230: 88.0012% ( 13) 00:25:19.731 11905.230 - 11962.466: 88.0896% ( 12) 00:25:19.731 11962.466 - 12019.703: 88.1854% ( 13) 00:25:19.731 12019.703 - 12076.940: 88.2591% ( 10) 00:25:19.731 12076.940 - 12134.176: 88.3771% ( 16) 00:25:19.731 12134.176 - 12191.413: 88.4655% ( 12) 00:25:19.731 12191.413 - 12248.650: 88.5466% ( 11) 00:25:19.731 12248.650 - 12305.886: 88.6055% ( 8) 00:25:19.731 12305.886 - 12363.123: 88.7014% ( 13) 00:25:19.731 12363.123 - 12420.360: 88.7824% ( 11) 00:25:19.731 12420.360 - 12477.597: 88.8856% ( 14) 00:25:19.731 12477.597 - 12534.833: 88.9741% ( 12) 00:25:19.731 12534.833 - 12592.070: 89.1067% ( 18) 00:25:19.731 12592.070 - 12649.307: 89.2541% ( 20) 00:25:19.731 12649.307 - 12706.543: 89.4089% ( 21) 00:25:19.731 12706.543 - 12763.780: 89.6300% ( 30) 00:25:19.731 12763.780 - 12821.017: 89.7774% ( 20) 00:25:19.731 12821.017 - 12878.253: 89.9027% ( 17) 00:25:19.731 12878.253 - 12935.490: 90.0427% ( 19) 00:25:19.731 12935.490 - 12992.727: 90.1754% ( 18) 00:25:19.731 12992.727 - 13049.963: 90.3302% ( 21) 00:25:19.731 13049.963 - 13107.200: 90.4776% ( 20) 00:25:19.731 13107.200 - 13164.437: 90.6397% ( 22) 00:25:19.731 13164.437 - 13221.673: 90.7871% ( 20) 00:25:19.731 13221.673 - 13278.910: 90.9272% ( 19) 00:25:19.731 13278.910 - 13336.147: 91.0746% ( 20) 00:25:19.731 13336.147 - 13393.383: 91.2220% ( 20) 00:25:19.731 13393.383 - 13450.620: 91.3694% ( 20) 00:25:19.731 13450.620 - 
13507.857: 91.5168% ( 20) 00:25:19.731 13507.857 - 13565.093: 91.6568% ( 19) 00:25:19.731 13565.093 - 13622.330: 91.7379% ( 11) 00:25:19.731 13622.330 - 13679.567: 91.8190% ( 11) 00:25:19.731 13679.567 - 13736.803: 91.9001% ( 11) 00:25:19.731 13736.803 - 13794.040: 91.9959% ( 13) 00:25:19.731 13794.040 - 13851.277: 92.1064% ( 15) 00:25:19.731 13851.277 - 13908.514: 92.2096% ( 14) 00:25:19.731 13908.514 - 13965.750: 92.3128% ( 14) 00:25:19.731 13965.750 - 14022.987: 92.3939% ( 11) 00:25:19.731 14022.987 - 14080.224: 92.4676% ( 10) 00:25:19.731 14080.224 - 14137.460: 92.5708% ( 14) 00:25:19.731 14137.460 - 14194.697: 92.6666% ( 13) 00:25:19.731 14194.697 - 14251.934: 92.7550% ( 12) 00:25:19.731 14251.934 - 14309.170: 92.8213% ( 9) 00:25:19.731 14309.170 - 14366.407: 92.8877% ( 9) 00:25:19.731 14366.407 - 14423.644: 92.9835% ( 13) 00:25:19.731 14423.644 - 14480.880: 93.0498% ( 9) 00:25:19.731 14480.880 - 14538.117: 93.1235% ( 10) 00:25:19.731 14538.117 - 14595.354: 93.1972% ( 10) 00:25:19.731 14595.354 - 14652.590: 93.2783% ( 11) 00:25:19.731 14652.590 - 14767.064: 93.4331% ( 21) 00:25:19.731 14767.064 - 14881.537: 93.5731% ( 19) 00:25:19.731 14881.537 - 14996.010: 93.6837% ( 15) 00:25:19.731 14996.010 - 15110.484: 93.8237% ( 19) 00:25:19.731 15110.484 - 15224.957: 93.9269% ( 14) 00:25:19.731 15224.957 - 15339.431: 94.0522% ( 17) 00:25:19.731 15339.431 - 15453.904: 94.2217% ( 23) 00:25:19.731 15453.904 - 15568.377: 94.3838% ( 22) 00:25:19.731 15568.377 - 15682.851: 94.5607% ( 24) 00:25:19.731 15682.851 - 15797.324: 94.7376% ( 24) 00:25:19.731 15797.324 - 15911.797: 94.9366% ( 27) 00:25:19.731 15911.797 - 16026.271: 95.1430% ( 28) 00:25:19.731 16026.271 - 16140.744: 95.3272% ( 25) 00:25:19.731 16140.744 - 16255.217: 95.4673% ( 19) 00:25:19.731 16255.217 - 16369.691: 95.6073% ( 19) 00:25:19.731 16369.691 - 16484.164: 95.7768% ( 23) 00:25:19.731 16484.164 - 16598.638: 95.9242% ( 20) 00:25:19.731 16598.638 - 16713.111: 96.0716% ( 20) 00:25:19.731 16713.111 - 16827.584: 96.2412% ( 23) 00:25:19.731 16827.584 - 16942.058: 96.4033% ( 22) 00:25:19.731 16942.058 - 17056.531: 96.5654% ( 22) 00:25:19.731 17056.531 - 17171.004: 96.7202% ( 21) 00:25:19.731 17171.004 - 17285.478: 96.8750% ( 21) 00:25:19.731 17285.478 - 17399.951: 97.0519% ( 24) 00:25:19.731 17399.951 - 17514.424: 97.2361% ( 25) 00:25:19.731 17514.424 - 17628.898: 97.3835% ( 20) 00:25:19.731 17628.898 - 17743.371: 97.5457% ( 22) 00:25:19.731 17743.371 - 17857.845: 97.6489% ( 14) 00:25:19.731 17857.845 - 17972.318: 97.6710% ( 3) 00:25:19.731 17972.318 - 18086.791: 97.7447% ( 10) 00:25:19.731 18086.791 - 18201.265: 97.8479% ( 14) 00:25:19.731 18201.265 - 18315.738: 97.9511% ( 14) 00:25:19.731 18315.738 - 18430.211: 98.0469% ( 13) 00:25:19.731 18430.211 - 18544.685: 98.1353% ( 12) 00:25:19.731 18544.685 - 18659.158: 98.2090% ( 10) 00:25:19.731 18659.158 - 18773.631: 98.2975% ( 12) 00:25:19.731 18773.631 - 18888.105: 98.3785% ( 11) 00:25:19.731 18888.105 - 19002.578: 98.4596% ( 11) 00:25:19.731 19002.578 - 19117.052: 98.5333% ( 10) 00:25:19.731 19117.052 - 19231.525: 98.5849% ( 7) 00:25:19.731 20376.259 - 20490.732: 98.6070% ( 3) 00:25:19.731 20490.732 - 20605.205: 98.6218% ( 2) 00:25:19.731 20605.205 - 20719.679: 98.6365% ( 2) 00:25:19.731 20719.679 - 20834.152: 98.6512% ( 2) 00:25:19.731 20834.152 - 20948.625: 98.6733% ( 3) 00:25:19.731 20948.625 - 21063.099: 98.6955% ( 3) 00:25:19.731 21063.099 - 21177.572: 98.7102% ( 2) 00:25:19.731 21177.572 - 21292.045: 98.7323% ( 3) 00:25:19.731 21292.045 - 21406.519: 98.7544% ( 3) 00:25:19.731 21406.519 
- 21520.992: 98.7692% ( 2)
00:25:19.731 21520.992 - 21635.466: 98.7913% ( 3)
00:25:19.731 21635.466 - 21749.939: 98.8134% ( 3)
00:25:19.731 21749.939 - 21864.412: 98.8355% ( 3)
00:25:19.731 21864.412 - 21978.886: 98.8576% ( 3)
00:25:19.731 21978.886 - 22093.359: 98.8723% ( 2)
00:25:19.731 22093.359 - 22207.832: 98.8945% ( 3)
00:25:19.731 22207.832 - 22322.306: 98.9166% ( 3)
00:25:19.731 22322.306 - 22436.779: 98.9387% ( 3)
00:25:19.731 22436.779 - 22551.252: 98.9608% ( 3)
00:25:19.731 22551.252 - 22665.726: 98.9755% ( 2)
00:25:19.731 22665.726 - 22780.199: 98.9976% ( 3)
00:25:19.731 22780.199 - 22894.672: 99.0198% ( 3)
00:25:19.731 22894.672 - 23009.146: 99.0345% ( 2)
00:25:19.731 23009.146 - 23123.619: 99.0492% ( 2)
00:25:19.731 23123.619 - 23238.093: 99.0566% ( 1)
00:25:19.731 23924.933 - 24039.406: 99.0713% ( 2)
00:25:19.731 24039.406 - 24153.879: 99.1082% ( 5)
00:25:19.731 24153.879 - 24268.353: 99.1229% ( 2)
00:25:19.731 24268.353 - 24382.826: 99.1303% ( 1)
00:25:19.731 24382.826 - 24497.300: 99.1450% ( 2)
00:25:19.731 24497.300 - 24611.773: 99.1598% ( 2)
00:25:19.731 24611.773 - 24726.246: 99.1745% ( 2)
00:25:19.731 24726.246 - 24840.720: 99.1893% ( 2)
00:25:19.731 24840.720 - 24955.193: 99.2114% ( 3)
00:25:19.731 24955.193 - 25069.666: 99.2261% ( 2)
00:25:19.731 25069.666 - 25184.140: 99.2630% ( 5)
00:25:19.731 25184.140 - 25298.613: 99.2777% ( 2)
00:25:19.731 25298.613 - 25413.086: 99.2925% ( 2)
00:25:19.731 25413.086 - 25527.560: 99.3072% ( 2)
00:25:19.731 25527.560 - 25642.033: 99.3219% ( 2)
00:25:19.731 25642.033 - 25756.507: 99.3440% ( 3)
00:25:19.731 25756.507 - 25870.980: 99.3514% ( 1)
00:25:19.731 25870.980 - 25985.453: 99.3662% ( 2)
00:25:19.731 25985.453 - 26099.927: 99.3809% ( 2)
00:25:19.731 26099.927 - 26214.400: 99.4030% ( 3)
00:25:19.731 26214.400 - 26328.873: 99.4177% ( 2)
00:25:19.731 26328.873 - 26443.347: 99.4325% ( 2)
00:25:19.731 26443.347 - 26557.820: 99.4472% ( 2)
00:25:19.731 26557.820 - 26672.293: 99.4693% ( 3)
00:25:19.731 26672.293 - 26786.767: 99.4841% ( 2)
00:25:19.731 26786.767 - 26901.240: 99.5062% ( 3)
00:25:19.731 26901.240 - 27015.714: 99.5209% ( 2)
00:25:19.731 27015.714 - 27130.187: 99.5283% ( 1)
00:25:19.731 34799.902 - 35028.849: 99.5652% ( 5)
00:25:19.731 35028.849 - 35257.796: 99.6020% ( 5)
00:25:19.731 35257.796 - 35486.742: 99.6389% ( 5)
00:25:19.731 35486.742 - 35715.689: 99.6757% ( 5)
00:25:19.731 35715.689 - 35944.636: 99.7199% ( 6)
00:25:19.731 35944.636 - 36173.583: 99.7568% ( 5)
00:25:19.731 36173.583 - 36402.529: 99.7936% ( 5)
00:25:19.731 36402.529 - 36631.476: 99.8379% ( 6)
00:25:19.731 36631.476 - 36860.423: 99.8747% ( 5)
00:25:19.731 36860.423 - 37089.369: 99.9189% ( 6)
00:25:19.731 37089.369 - 37318.316: 99.9558% ( 5)
00:25:19.731 37318.316 - 37547.263: 99.9926% ( 5)
00:25:19.731 37547.263 - 37776.210: 100.0000% ( 1)
00:25:19.731
00:25:19.731 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:25:19.731 ==============================================================================
00:25:19.731 Range in us Cumulative IO count
00:25:19.731 7125.967 - 7154.585: 0.0147% ( 2)
00:25:19.731 7154.585 - 7183.203: 0.0221% ( 1)
00:25:19.731 7183.203 - 7211.822: 0.0811% ( 8)
00:25:19.731 7211.822 - 7240.440: 0.1253% ( 6)
00:25:19.731 7240.440 - 7269.059: 0.1916% ( 9)
00:25:19.731 7269.059 - 7297.677: 0.3169% ( 17)
00:25:19.731 7297.677 - 7326.295: 0.5675% ( 34)
00:25:19.731 7326.295 - 7383.532: 1.1866% ( 84)
00:25:19.731 7383.532 - 7440.769: 2.2185% ( 140)
00:25:19.731 7440.769 - 7498.005: 3.7588% ( 209)
00:25:19.731 7498.005 - 7555.242: 5.4466% ( 229)
00:25:19.732 7555.242 - 7612.479: 7.8199% ( 322)
00:25:19.732 7612.479 - 7669.715: 10.3479% ( 343)
00:25:19.732 7669.715 - 7726.952: 13.2075% ( 388)
00:25:19.732 7726.952 - 7784.189: 16.1851% ( 404)
00:25:19.732 7784.189 - 7841.425: 19.5828% ( 461)
00:25:19.732 7841.425 - 7898.662: 22.9068% ( 451)
00:25:19.732 7898.662 - 7955.899: 26.1792% ( 444)
00:25:19.732 7955.899 - 8013.135: 29.4517% ( 444)
00:25:19.732 8013.135 - 8070.372: 32.8199% ( 457)
00:25:19.732 8070.372 - 8127.609: 36.1365% ( 450)
00:25:19.732 8127.609 - 8184.845: 39.7700% ( 493)
00:25:19.732 8184.845 - 8242.082: 43.2709% ( 475)
00:25:19.732 8242.082 - 8299.319: 46.8234% ( 482)
00:25:19.732 8299.319 - 8356.555: 50.1253% ( 448)
00:25:19.732 8356.555 - 8413.792: 53.1545% ( 411)
00:25:19.732 8413.792 - 8471.029: 55.8299% ( 363)
00:25:19.732 8471.029 - 8528.266: 58.2768% ( 332)
00:25:19.732 8528.266 - 8585.502: 60.4658% ( 297)
00:25:19.732 8585.502 - 8642.739: 62.3157% ( 251)
00:25:19.732 8642.739 - 8699.976: 63.9298% ( 219)
00:25:19.732 8699.976 - 8757.212: 65.3081% ( 187)
00:25:19.732 8757.212 - 8814.449: 66.5389% ( 167)
00:25:19.732 8814.449 - 8871.686: 67.6150% ( 146)
00:25:19.732 8871.686 - 8928.922: 68.6984% ( 147)
00:25:19.732 8928.922 - 8986.159: 69.6713% ( 132)
00:25:19.732 8986.159 - 9043.396: 70.6073% ( 127)
00:25:19.732 9043.396 - 9100.632: 71.4844% ( 119)
00:25:19.732 9100.632 - 9157.869: 72.3246% ( 114)
00:25:19.732 9157.869 - 9215.106: 73.1353% ( 110)
00:25:19.732 9215.106 - 9272.342: 73.8871% ( 102)
00:25:19.732 9272.342 - 9329.579: 74.5136% ( 85)
00:25:19.732 9329.579 - 9386.816: 75.0221% ( 69)
00:25:19.732 9386.816 - 9444.052: 75.5601% ( 73)
00:25:19.732 9444.052 - 9501.289: 76.0392% ( 65)
00:25:19.732 9501.289 - 9558.526: 76.4372% ( 54)
00:25:19.732 9558.526 - 9615.762: 76.7983% ( 49)
00:25:19.732 9615.762 - 9672.999: 77.2185% ( 57)
00:25:19.732 9672.999 - 9730.236: 77.5427% ( 44)
00:25:19.732 9730.236 - 9787.472: 77.9113% ( 50)
00:25:19.732 9787.472 - 9844.709: 78.3535% ( 60)
00:25:19.732 9844.709 - 9901.946: 78.7736% ( 57)
00:25:19.732 9901.946 - 9959.183: 79.1642% ( 53)
00:25:19.732 9959.183 - 10016.419: 79.5696% ( 55)
00:25:19.732 10016.419 - 10073.656: 80.0265% ( 62)
00:25:19.732 10073.656 - 10130.893: 80.4540% ( 58)
00:25:19.732 10130.893 - 10188.129: 80.8741% ( 57)
00:25:19.732 10188.129 - 10245.366: 81.3016% ( 58)
00:25:19.732 10245.366 - 10302.603: 81.6848% ( 52)
00:25:19.732 10302.603 - 10359.839: 82.1050% ( 57)
00:25:19.732 10359.839 - 10417.076: 82.4735% ( 50)
00:25:19.732 10417.076 - 10474.313: 82.7978% ( 44)
00:25:19.732 10474.313 - 10531.549: 83.1442% ( 47)
00:25:19.732 10531.549 - 10588.786: 83.4906% ( 47)
00:25:19.732 10588.786 - 10646.023: 83.8296% ( 46)
00:25:19.732 10646.023 - 10703.259: 84.1981% ( 50)
00:25:19.732 10703.259 - 10760.496: 84.5150% ( 43)
00:25:19.732 10760.496 - 10817.733: 84.9057% ( 53)
00:25:19.732 10817.733 - 10874.969: 85.1489% ( 33)
00:25:19.732 10874.969 - 10932.206: 85.4732% ( 44)
00:25:19.732 10932.206 - 10989.443: 85.7385% ( 36)
00:25:19.732 10989.443 - 11046.679: 85.9522% ( 29)
00:25:19.732 11046.679 - 11103.916: 86.1586% ( 28)
00:25:19.732 11103.916 - 11161.153: 86.3650% ( 28)
00:25:19.732 11161.153 - 11218.390: 86.5713% ( 28)
00:25:19.732 11218.390 - 11275.626: 86.7777% ( 28)
00:25:19.732 11275.626 - 11332.863: 86.9546% ( 24)
00:25:19.732 11332.863 - 11390.100: 87.1315% ( 24)
00:25:19.732 11390.100 - 11447.336: 87.2936% ( 22)
00:25:19.732 11447.336 - 11504.573: 87.4410% ( 20)
00:25:19.732 11504.573 - 11561.810: 87.6253% ( 25)
00:25:19.732 11561.810 - 11619.046: 87.7801% ( 21)
00:25:19.732 11619.046 - 11676.283: 87.9127% ( 18)
00:25:19.732 11676.283 - 11733.520: 88.0454% ( 18)
00:25:19.732 11733.520 - 11790.756: 88.1486% ( 14)
00:25:19.732 11790.756 - 11847.993: 88.2297% ( 11)
00:25:19.732 11847.993 - 11905.230: 88.3255% ( 13)
00:25:19.732 11905.230 - 11962.466: 88.4213% ( 13)
00:25:19.732 11962.466 - 12019.703: 88.5024% ( 11)
00:25:19.732 12019.703 - 12076.940: 88.5613% ( 8)
00:25:19.732 12076.940 - 12134.176: 88.6571% ( 13)
00:25:19.732 12134.176 - 12191.413: 88.7161% ( 8)
00:25:19.732 12191.413 - 12248.650: 88.7751% ( 8)
00:25:19.732 12248.650 - 12305.886: 88.8340% ( 8)
00:25:19.732 12305.886 - 12363.123: 88.9077% ( 10)
00:25:19.732 12363.123 - 12420.360: 88.9593% ( 7)
00:25:19.732 12420.360 - 12477.597: 89.0183% ( 8)
00:25:19.732 12477.597 - 12534.833: 89.0625% ( 6)
00:25:19.732 12534.833 - 12592.070: 89.1067% ( 6)
00:25:19.732 12592.070 - 12649.307: 89.1436% ( 5)
00:25:19.732 12649.307 - 12706.543: 89.1731% ( 4)
00:25:19.732 12706.543 - 12763.780: 89.2025% ( 4)
00:25:19.732 12763.780 - 12821.017: 89.2394% ( 5)
00:25:19.732 12821.017 - 12878.253: 89.2689% ( 4)
00:25:19.732 12878.253 - 12935.490: 89.3131% ( 6)
00:25:19.732 12935.490 - 12992.727: 89.3721% ( 8)
00:25:19.732 12992.727 - 13049.963: 89.4310% ( 8)
00:25:19.732 13049.963 - 13107.200: 89.5121% ( 11)
00:25:19.732 13107.200 - 13164.437: 89.6005% ( 12)
00:25:19.732 13164.437 - 13221.673: 89.6669% ( 9)
00:25:19.732 13221.673 - 13278.910: 89.7479% ( 11)
00:25:19.732 13278.910 - 13336.147: 89.9027% ( 21)
00:25:19.732 13336.147 - 13393.383: 90.0206% ( 16)
00:25:19.732 13393.383 - 13450.620: 90.1607% ( 19)
00:25:19.732 13450.620 - 13507.857: 90.3081% ( 20)
00:25:19.732 13507.857 - 13565.093: 90.5144% ( 28)
00:25:19.732 13565.093 - 13622.330: 90.6913% ( 24)
00:25:19.732 13622.330 - 13679.567: 90.9124% ( 30)
00:25:19.732 13679.567 - 13736.803: 91.0967% ( 25)
00:25:19.732 13736.803 - 13794.040: 91.2883% ( 26)
00:25:19.732 13794.040 - 13851.277: 91.4800% ( 26)
00:25:19.732 13851.277 - 13908.514: 91.6495% ( 23)
00:25:19.732 13908.514 - 13965.750: 91.8411% ( 26)
00:25:19.732 13965.750 - 14022.987: 92.0254% ( 25)
00:25:19.732 14022.987 - 14080.224: 92.2022% ( 24)
00:25:19.732 14080.224 - 14137.460: 92.3791% ( 24)
00:25:19.732 14137.460 - 14194.697: 92.5634% ( 25)
00:25:19.732 14194.697 - 14251.934: 92.6813% ( 16)
00:25:19.732 14251.934 - 14309.170: 92.8213% ( 19)
00:25:19.732 14309.170 - 14366.407: 92.9761% ( 21)
00:25:19.732 14366.407 - 14423.644: 93.1456% ( 23)
00:25:19.732 14423.644 - 14480.880: 93.2930% ( 20)
00:25:19.732 14480.880 - 14538.117: 93.4036% ( 15)
00:25:19.732 14538.117 - 14595.354: 93.5142% ( 15)
00:25:19.732 14595.354 - 14652.590: 93.5879% ( 10)
00:25:19.732 14652.590 - 14767.064: 93.7574% ( 23)
00:25:19.732 14767.064 - 14881.537: 93.9269% ( 23)
00:25:19.732 14881.537 - 14996.010: 94.1038% ( 24)
00:25:19.732 14996.010 - 15110.484: 94.2807% ( 24)
00:25:19.732 15110.484 - 15224.957: 94.4797% ( 27)
00:25:19.732 15224.957 - 15339.431: 94.6713% ( 26)
00:25:19.732 15339.431 - 15453.904: 94.8555% ( 25)
00:25:19.732 15453.904 - 15568.377: 95.0029% ( 20)
00:25:19.732 15568.377 - 15682.851: 95.1651% ( 22)
00:25:19.732 15682.851 - 15797.324: 95.3199% ( 21)
00:25:19.732 15797.324 - 15911.797: 95.4673% ( 20)
00:25:19.732 15911.797 - 16026.271: 95.5926% ( 17)
00:25:19.732 16026.271 - 16140.744: 95.6958% ( 14)
00:25:19.732 16140.744 - 16255.217: 95.7916% ( 13)
00:25:19.732 16255.217 - 16369.691: 95.9169% ( 17)
00:25:19.732 16369.691 - 16484.164: 96.0716% ( 21)
00:25:19.732 16484.164 - 16598.638: 96.1896% ( 16)
00:25:19.732 16598.638 - 16713.111: 96.2927% ( 14)
00:25:19.732 16713.111 - 16827.584: 96.4402% ( 20)
00:25:19.732 16827.584 - 16942.058: 96.5876% ( 20)
00:25:19.732 16942.058 - 17056.531: 96.7276% ( 19)
00:25:19.732 17056.531 - 17171.004: 96.8750% ( 20)
00:25:19.732 17171.004 - 17285.478: 97.0298% ( 21)
00:25:19.733 17285.478 - 17399.951: 97.1698% ( 19)
00:25:19.733 17399.951 - 17514.424: 97.2730% ( 14)
00:25:19.733 17514.424 - 17628.898: 97.3762% ( 14)
00:25:19.733 17628.898 - 17743.371: 97.4573% ( 11)
00:25:19.733 17743.371 - 17857.845: 97.5457% ( 12)
00:25:19.733 17857.845 - 17972.318: 97.6268% ( 11)
00:25:19.733 17972.318 - 18086.791: 97.6931% ( 9)
00:25:19.733 18086.791 - 18201.265: 97.7373% ( 6)
00:25:19.733 18201.265 - 18315.738: 97.8405% ( 14)
00:25:19.733 18315.738 - 18430.211: 97.9142% ( 10)
00:25:19.733 18430.211 - 18544.685: 98.0027% ( 12)
00:25:19.733 18544.685 - 18659.158: 98.0985% ( 13)
00:25:19.733 18659.158 - 18773.631: 98.1869% ( 12)
00:25:19.733 18773.631 - 18888.105: 98.2680% ( 11)
00:25:19.733 18888.105 - 19002.578: 98.3564% ( 12)
00:25:19.733 19002.578 - 19117.052: 98.4596% ( 14)
00:25:19.733 19117.052 - 19231.525: 98.5259% ( 9)
00:25:19.733 19231.525 - 19345.998: 98.5775% ( 7)
00:25:19.733 19345.998 - 19460.472: 98.5849% ( 1)
00:25:19.733 20261.785 - 20376.259: 98.6144% ( 4)
00:25:19.733 20376.259 - 20490.732: 98.6733% ( 8)
00:25:19.733 20490.732 - 20605.205: 98.6807% ( 1)
00:25:19.733 20605.205 - 20719.679: 98.7028% ( 3)
00:25:19.733 20719.679 - 20834.152: 98.7323% ( 4)
00:25:19.733 20834.152 - 20948.625: 98.7544% ( 3)
00:25:19.733 20948.625 - 21063.099: 98.7618% ( 1)
00:25:19.733 21063.099 - 21177.572: 98.7839% ( 3)
00:25:19.733 21177.572 - 21292.045: 98.8060% ( 3)
00:25:19.733 21292.045 - 21406.519: 98.8208% ( 2)
00:25:19.733 21406.519 - 21520.992: 98.8429% ( 3)
00:25:19.733 21520.992 - 21635.466: 98.8650% ( 3)
00:25:19.733 21635.466 - 21749.939: 98.8871% ( 3)
00:25:19.733 21749.939 - 21864.412: 98.9092% ( 3)
00:25:19.733 21864.412 - 21978.886: 98.9239% ( 2)
00:25:19.733 21978.886 - 22093.359: 98.9387% ( 2)
00:25:19.733 22093.359 - 22207.832: 98.9608% ( 3)
00:25:19.733 22207.832 - 22322.306: 98.9976% ( 5)
00:25:19.733 22322.306 - 22436.779: 99.0345% ( 5)
00:25:19.733 22436.779 - 22551.252: 99.0713% ( 5)
00:25:19.733 22551.252 - 22665.726: 99.1008% ( 4)
00:25:19.733 22665.726 - 22780.199: 99.1450% ( 6)
00:25:19.733 22780.199 - 22894.672: 99.1672% ( 3)
00:25:19.733 22894.672 - 23009.146: 99.1819% ( 2)
00:25:19.733 23009.146 - 23123.619: 99.1966% ( 2)
00:25:19.733 23123.619 - 23238.093: 99.2188% ( 3)
00:25:19.733 23238.093 - 23352.566: 99.2335% ( 2)
00:25:19.733 23352.566 - 23467.039: 99.2409% ( 1)
00:25:19.733 23467.039 - 23581.513: 99.2482% ( 1)
00:25:19.733 23581.513 - 23695.986: 99.2703% ( 3)
00:25:19.733 23695.986 - 23810.459: 99.2851% ( 2)
00:25:19.733 23810.459 - 23924.933: 99.3072% ( 3)
00:25:19.733 23924.933 - 24039.406: 99.3219% ( 2)
00:25:19.733 24039.406 - 24153.879: 99.3440% ( 3)
00:25:19.733 24153.879 - 24268.353: 99.3588% ( 2)
00:25:19.733 24268.353 - 24382.826: 99.3735% ( 2)
00:25:19.733 24382.826 - 24497.300: 99.3883% ( 2)
00:25:19.733 24497.300 - 24611.773: 99.4104% ( 3)
00:25:19.733 24611.773 - 24726.246: 99.4251% ( 2)
00:25:19.733 24726.246 - 24840.720: 99.4399% ( 2)
00:25:19.733 24840.720 - 24955.193: 99.4546% ( 2)
00:25:19.733 24955.193 - 25069.666: 99.4767% ( 3)
00:25:19.733 25069.666 - 25184.140: 99.4915% ( 2)
00:25:19.733 25184.140 - 25298.613: 99.5136% ( 3)
00:25:19.733 25298.613 - 25413.086: 99.5283% ( 2)
00:25:19.733 32510.435 - 32739.382: 99.5504% ( 3)
00:25:19.733 32739.382 - 32968.328: 99.5873% ( 5)
00:25:19.733 32968.328 - 33197.275: 99.6315% ( 6)
00:25:19.733 33197.275 - 33426.222: 99.6683% ( 5)
00:25:19.733 33426.222 - 33655.169: 99.7052% ( 5)
00:25:19.733 33655.169 - 33884.115: 99.7494% ( 6)
00:25:19.733 33884.115 - 34113.062: 99.7863% ( 5)
00:25:19.733 34113.062 - 34342.009: 99.8231% ( 5)
00:25:19.733 34342.009 - 34570.955: 99.8673% ( 6)
00:25:19.733 34570.955 - 34799.902: 99.9042% ( 5)
00:25:19.733 34799.902 - 35028.849: 99.9410% ( 5)
00:25:19.733 35028.849 - 35257.796: 99.9853% ( 6)
00:25:19.733 35257.796 - 35486.742: 100.0000% ( 2)
00:25:19.733
00:25:19.733 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:25:19.733 ==============================================================================
00:25:19.733 Range in us Cumulative IO count
00:25:19.733 7097.348 - 7125.967: 0.0074% ( 1)
00:25:19.733 7125.967 - 7154.585: 0.0221% ( 2)
00:25:19.733 7154.585 - 7183.203: 0.0516% ( 4)
00:25:19.733 7183.203 - 7211.822: 0.0811% ( 4)
00:25:19.733 7211.822 - 7240.440: 0.1253% ( 6)
00:25:19.733 7240.440 - 7269.059: 0.2285% ( 14)
00:25:19.733 7269.059 - 7297.677: 0.3685% ( 19)
00:25:19.733 7297.677 - 7326.295: 0.5233% ( 21)
00:25:19.733 7326.295 - 7383.532: 1.1498% ( 85)
00:25:19.733 7383.532 - 7440.769: 2.2479% ( 149)
00:25:19.733 7440.769 - 7498.005: 3.7810% ( 208)
00:25:19.733 7498.005 - 7555.242: 5.7415% ( 266)
00:25:19.733 7555.242 - 7612.479: 7.9231% ( 296)
00:25:19.733 7612.479 - 7669.715: 10.6058% ( 364)
00:25:19.733 7669.715 - 7726.952: 13.5908% ( 405)
00:25:19.733 7726.952 - 7784.189: 16.6863% ( 420)
00:25:19.733 7784.189 - 7841.425: 19.9071% ( 437)
00:25:19.733 7841.425 - 7898.662: 23.0174% ( 422)
00:25:19.733 7898.662 - 7955.899: 26.2824% ( 443)
00:25:19.733 7955.899 - 8013.135: 29.6506% ( 457)
00:25:19.733 8013.135 - 8070.372: 33.0483% ( 461)
00:25:19.733 8070.372 - 8127.609: 36.4460% ( 461)
00:25:19.733 8127.609 - 8184.845: 39.9838% ( 480)
00:25:19.733 8184.845 - 8242.082: 43.5805% ( 488)
00:25:19.733 8242.082 - 8299.319: 47.0740% ( 474)
00:25:19.733 8299.319 - 8356.555: 50.3759% ( 448)
00:25:19.733 8356.555 - 8413.792: 53.3977% ( 410)
00:25:19.733 8413.792 - 8471.029: 56.0805% ( 364)
00:25:19.733 8471.029 - 8528.266: 58.5569% ( 336)
00:25:19.733 8528.266 - 8585.502: 60.6869% ( 289)
00:25:19.733 8585.502 - 8642.739: 62.4779% ( 243)
00:25:19.733 8642.739 - 8699.976: 64.1288% ( 224)
00:25:19.733 8699.976 - 8757.212: 65.4997% ( 186)
00:25:19.733 8757.212 - 8814.449: 66.6347% ( 154)
00:25:19.733 8814.449 - 8871.686: 67.6813% ( 142)
00:25:19.733 8871.686 - 8928.922: 68.6468% ( 131)
00:25:19.733 8928.922 - 8986.159: 69.5386% ( 121)
00:25:19.733 8986.159 - 9043.396: 70.4157% ( 119)
00:25:19.733 9043.396 - 9100.632: 71.2706% ( 116)
00:25:19.733 9100.632 - 9157.869: 72.0298% ( 103)
00:25:19.733 9157.869 - 9215.106: 72.8479% ( 111)
00:25:19.733 9215.106 - 9272.342: 73.5481% ( 95)
00:25:19.733 9272.342 - 9329.579: 74.1377% ( 80)
00:25:19.733 9329.579 - 9386.816: 74.6094% ( 64)
00:25:19.733 9386.816 - 9444.052: 75.1253% ( 70)
00:25:19.733 9444.052 - 9501.289: 75.5675% ( 60)
00:25:19.733 9501.289 - 9558.526: 75.9213% ( 48)
00:25:19.733 9558.526 - 9615.762: 76.2603% ( 46)
00:25:19.733 9615.762 - 9672.999: 76.5994% ( 46)
00:25:19.733 9672.999 - 9730.236: 76.8868% ( 39)
00:25:19.733 9730.236 - 9787.472: 77.2553% ( 50)
00:25:19.733 9787.472 - 9844.709: 77.6459% ( 53)
00:25:19.733 9844.709 - 9901.946: 78.1103% ( 63)
00:25:19.733 9901.946 - 9959.183: 78.5525% ( 60)
00:25:19.733 9959.183 - 10016.419: 78.9947% ( 60)
00:25:19.733 10016.419 - 10073.656: 79.4517% ( 62)
00:25:19.733 10073.656 - 10130.893: 79.8939% ( 60)
00:25:19.733 10130.893 - 10188.129: 80.3361% ( 60)
00:25:19.733 10188.129 - 10245.366: 80.8078% ( 64)
00:25:19.733 10245.366 - 10302.603: 81.2721% ( 63)
00:25:19.733 10302.603 - 10359.839: 81.7364% ( 63)
00:25:19.733 10359.839 - 10417.076: 82.1344% ( 54)
00:25:19.733 10417.076 - 10474.313: 82.4956% ( 49)
00:25:19.733 10474.313 - 10531.549: 82.8567% ( 49)
00:25:19.733 10531.549 - 10588.786: 83.2105% ( 48)
00:25:19.733 10588.786 - 10646.023: 83.5643% ( 48)
00:25:19.733 10646.023 - 10703.259: 83.9770% ( 56)
00:25:19.733 10703.259 - 10760.496: 84.3750% ( 54)
00:25:19.733 10760.496 - 10817.733: 84.7435% ( 50)
00:25:19.733 10817.733 - 10874.969: 85.0899% ( 47)
00:25:19.733 10874.969 - 10932.206: 85.3552% ( 36)
00:25:19.733 10932.206 - 10989.443: 85.6353% ( 38)
00:25:19.733 10989.443 - 11046.679: 85.9080% ( 37)
00:25:19.733 11046.679 - 11103.916: 86.1365% ( 31)
00:25:19.733 11103.916 - 11161.153: 86.3576% ( 30)
00:25:19.733 11161.153 - 11218.390: 86.6008% ( 33)
00:25:19.733 11218.390 - 11275.626: 86.8367% ( 32)
00:25:19.733 11275.626 - 11332.863: 87.0652% ( 31)
00:25:19.733 11332.863 - 11390.100: 87.2642% ( 27)
00:25:19.733 11390.100 - 11447.336: 87.4558% ( 26)
00:25:19.733 11447.336 - 11504.573: 87.6548% ( 27)
00:25:19.733 11504.573 - 11561.810: 87.8317% ( 24)
00:25:19.733 11561.810 - 11619.046: 88.0528% ( 30)
00:25:19.733 11619.046 - 11676.283: 88.2370% ( 25)
00:25:19.733 11676.283 - 11733.520: 88.3918% ( 21)
00:25:19.733 11733.520 - 11790.756: 88.5245% ( 18)
00:25:19.733 11790.756 - 11847.993: 88.6129% ( 12)
00:25:19.733 11847.993 - 11905.230: 88.7235% ( 15)
00:25:19.733 11905.230 - 11962.466: 88.8045% ( 11)
00:25:19.733 11962.466 - 12019.703: 88.8930% ( 12)
00:25:19.733 12019.703 - 12076.940: 88.9667% ( 10)
00:25:19.733 12076.940 - 12134.176: 89.0404% ( 10)
00:25:19.733 12134.176 - 12191.413: 89.1141% ( 10)
00:25:19.733 12191.413 - 12248.650: 89.1583% ( 6)
00:25:19.733 12248.650 - 12305.886: 89.2025% ( 6)
00:25:19.733 12305.886 - 12363.123: 89.2394% ( 5)
00:25:19.733 12363.123 - 12420.360: 89.2541% ( 2)
00:25:19.733 12420.360 - 12477.597: 89.2910% ( 5)
00:25:19.733 12477.597 - 12534.833: 89.3205% ( 4)
00:25:19.733 12534.833 - 12592.070: 89.3499% ( 4)
00:25:19.733 12592.070 - 12649.307: 89.4015% ( 7)
00:25:19.733 12649.307 - 12706.543: 89.4458% ( 6)
00:25:19.733 12706.543 - 12763.780: 89.4826% ( 5)
00:25:19.733 12763.780 - 12821.017: 89.5342% ( 7)
00:25:19.733 12821.017 - 12878.253: 89.6005% ( 9)
00:25:19.733 12878.253 - 12935.490: 89.6890% ( 12)
00:25:19.733 12935.490 - 12992.727: 89.7553% ( 9)
00:25:19.733 12992.727 - 13049.963: 89.8438% ( 12)
00:25:19.733 13049.963 - 13107.200: 89.9396% ( 13)
00:25:19.733 13107.200 - 13164.437: 90.0206% ( 11)
00:25:19.733 13164.437 - 13221.673: 90.1091% ( 12)
00:25:19.733 13221.673 - 13278.910: 90.1754% ( 9)
00:25:19.733 13278.910 - 13336.147: 90.2491% ( 10)
00:25:19.733 13336.147 - 13393.383: 90.3081% ( 8)
00:25:19.733 13393.383 - 13450.620: 90.4113% ( 14)
00:25:19.733 13450.620 - 13507.857: 90.5071% ( 13)
00:25:19.733 13507.857 - 13565.093: 90.6250% ( 16)
00:25:19.733 13565.093 - 13622.330: 90.7650% ( 19)
00:25:19.733 13622.330 - 13679.567: 90.9124% ( 20)
00:25:19.733 13679.567 - 13736.803: 91.0598% ( 20)
00:25:19.733 13736.803 - 13794.040: 91.1999% ( 19)
00:25:19.733 13794.040 - 13851.277: 91.3178% ( 16)
00:25:19.733 13851.277 - 13908.514: 91.4652% ( 20)
00:25:19.733 13908.514 - 13965.750: 91.6347% ( 23)
00:25:19.733 13965.750 - 14022.987: 91.8337% ( 27)
00:25:19.734 14022.987 - 14080.224: 91.9590% ( 17)
00:25:19.734 14080.224 - 14137.460: 92.0843% ( 17)
00:25:19.734 14137.460 - 14194.697: 92.2022% ( 16)
00:25:19.734 14194.697 - 14251.934: 92.3423% ( 19)
00:25:19.734 14251.934 - 14309.170: 92.4749% ( 18)
00:25:19.734 14309.170 - 14366.407: 92.6076% ( 18)
00:25:19.734 14366.407 - 14423.644: 92.7403% ( 18)
00:25:19.734 14423.644 - 14480.880: 92.8508% ( 15)
00:25:19.734 14480.880 - 14538.117: 93.0130% ( 22)
00:25:19.734 14538.117 - 14595.354: 93.1677% ( 21)
00:25:19.734 14595.354 - 14652.590: 93.3299% ( 22)
00:25:19.734 14652.590 - 14767.064: 93.6100% ( 38)
00:25:19.734 14767.064 - 14881.537: 93.8900% ( 38)
00:25:19.734 14881.537 - 14996.010: 94.1259% ( 32)
00:25:19.734 14996.010 - 15110.484: 94.3249% ( 27)
00:25:19.734 15110.484 - 15224.957: 94.5534% ( 31)
00:25:19.734 15224.957 - 15339.431: 94.7597% ( 28)
00:25:19.734 15339.431 - 15453.904: 94.9145% ( 21)
00:25:19.734 15453.904 - 15568.377: 95.0472% ( 18)
00:25:19.734 15568.377 - 15682.851: 95.1651% ( 16)
00:25:19.734 15682.851 - 15797.324: 95.3567% ( 26)
00:25:19.734 15797.324 - 15911.797: 95.5336% ( 24)
00:25:19.734 15911.797 - 16026.271: 95.7179% ( 25)
00:25:19.734 16026.271 - 16140.744: 95.8948% ( 24)
00:25:19.734 16140.744 - 16255.217: 96.0790% ( 25)
00:25:19.734 16255.217 - 16369.691: 96.2485% ( 23)
00:25:19.734 16369.691 - 16484.164: 96.4328% ( 25)
00:25:19.734 16484.164 - 16598.638: 96.5654% ( 18)
00:25:19.734 16598.638 - 16713.111: 96.7129% ( 20)
00:25:19.734 16713.111 - 16827.584: 96.8308% ( 16)
00:25:19.734 16827.584 - 16942.058: 96.8897% ( 8)
00:25:19.734 16942.058 - 17056.531: 96.9487% ( 8)
00:25:19.734 17056.531 - 17171.004: 97.0003% ( 7)
00:25:19.734 17171.004 - 17285.478: 97.0740% ( 10)
00:25:19.734 17285.478 - 17399.951: 97.1330% ( 8)
00:25:19.734 17399.951 - 17514.424: 97.1846% ( 7)
00:25:19.734 17514.424 - 17628.898: 97.2435% ( 8)
00:25:19.734 17628.898 - 17743.371: 97.3025% ( 8)
00:25:19.734 17743.371 - 17857.845: 97.3614% ( 8)
00:25:19.734 17857.845 - 17972.318: 97.4204% ( 8)
00:25:19.734 17972.318 - 18086.791: 97.4794% ( 8)
00:25:19.734 18086.791 - 18201.265: 97.5604% ( 11)
00:25:19.734 18201.265 - 18315.738: 97.6415% ( 11)
00:25:19.734 18315.738 - 18430.211: 97.7005% ( 8)
00:25:19.734 18430.211 - 18544.685: 97.7668% ( 9)
00:25:19.734 18544.685 - 18659.158: 97.8479% ( 11)
00:25:19.734 18659.158 - 18773.631: 97.9216% ( 10)
00:25:19.734 18773.631 - 18888.105: 97.9879% ( 9)
00:25:19.734 18888.105 - 19002.578: 98.0764% ( 12)
00:25:19.734 19002.578 - 19117.052: 98.1501% ( 10)
00:25:19.734 19117.052 - 19231.525: 98.2311% ( 11)
00:25:19.734 19231.525 - 19345.998: 98.3122% ( 11)
00:25:19.734 19345.998 - 19460.472: 98.3859% ( 10)
00:25:19.734 19460.472 - 19574.945: 98.4670% ( 11)
00:25:19.734 19574.945 - 19689.418: 98.5554% ( 12)
00:25:19.734 19689.418 - 19803.892: 98.6070% ( 7)
00:25:19.734 19803.892 - 19918.365: 98.6365% ( 4)
00:25:19.734 19918.365 - 20032.838: 98.6586% ( 3)
00:25:19.734 20032.838 - 20147.312: 98.6733% ( 2)
00:25:19.734 20147.312 - 20261.785: 98.7028% ( 4)
00:25:19.734 20261.785 - 20376.259: 98.7176% ( 2)
00:25:19.734 20376.259 - 20490.732: 98.7618% ( 6)
00:25:19.734 20490.732 - 20605.205: 98.7913% ( 4)
00:25:19.734 20605.205 - 20719.679: 98.8281% ( 5)
00:25:19.734 20719.679 - 20834.152: 98.8650% ( 5)
00:25:19.734 20834.152 - 20948.625: 98.9018% ( 5)
00:25:19.734 20948.625 - 21063.099: 98.9313% ( 4)
00:25:19.734 21063.099 - 21177.572: 98.9608% ( 4)
00:25:19.734 21177.572 - 21292.045: 99.0050% ( 6)
00:25:19.734 21292.045 - 21406.519: 99.0345% ( 4)
00:25:19.734 21406.519 - 21520.992: 99.0713% ( 5)
00:25:19.734 21520.992 - 21635.466: 99.1008% ( 4)
00:25:19.734 21635.466 - 21749.939: 99.1450% ( 6)
00:25:19.734 21749.939 - 21864.412: 99.1819% ( 5)
00:25:19.734 21864.412 - 21978.886: 99.2188% ( 5)
00:25:19.734 21978.886 - 22093.359: 99.2630% ( 6)
00:25:19.734 22093.359 - 22207.832: 99.2925% ( 4)
00:25:19.734 22207.832 - 22322.306: 99.3219% ( 4)
00:25:19.734 22322.306 - 22436.779: 99.3514% ( 4)
00:25:19.734 22436.779 - 22551.252: 99.3662% ( 2)
00:25:19.734 22551.252 - 22665.726: 99.3883% ( 3)
00:25:19.734 22665.726 - 22780.199: 99.4030% ( 2)
00:25:19.734 22780.199 - 22894.672: 99.4177% ( 2)
00:25:19.734 22894.672 - 23009.146: 99.4399% ( 3)
00:25:19.734 23009.146 - 23123.619: 99.4546% ( 2)
00:25:19.734 23123.619 - 23238.093: 99.4693% ( 2)
00:25:19.734 23238.093 - 23352.566: 99.4915% ( 3)
00:25:19.734 23352.566 - 23467.039: 99.5062% ( 2)
00:25:19.734 23467.039 - 23581.513: 99.5209% ( 2)
00:25:19.734 23581.513 - 23695.986: 99.5283% ( 1)
00:25:19.734 30220.968 - 30449.914: 99.5504% ( 3)
00:25:19.734 30449.914 - 30678.861: 99.5873% ( 5)
00:25:19.734 30678.861 - 30907.808: 99.6315% ( 6)
00:25:19.734 30907.808 - 31136.755: 99.6683% ( 5)
00:25:19.734 31136.755 - 31365.701: 99.7052% ( 5)
00:25:19.734 31365.701 - 31594.648: 99.7420% ( 5)
00:25:19.734 31594.648 - 31823.595: 99.7863% ( 6)
00:25:19.734 31823.595 - 32052.541: 99.8231% ( 5)
00:25:19.734 32052.541 - 32281.488: 99.8600% ( 5)
00:25:19.734 32281.488 - 32510.435: 99.8894% ( 4)
00:25:19.734 32510.435 - 32739.382: 99.9263% ( 5)
00:25:19.734 32739.382 - 32968.328: 99.9631% ( 5)
00:25:19.734 32968.328 - 33197.275: 100.0000% ( 5)
00:25:19.734
00:25:19.734 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:25:19.734 ==============================================================================
00:25:19.734 Range in us Cumulative IO count
00:25:19.734 7068.730 - 7097.348: 0.0147% ( 2)
00:25:19.734 7097.348 - 7125.967: 0.0221% ( 1)
00:25:19.734 7125.967 - 7154.585: 0.0369% ( 2)
00:25:19.734 7154.585 - 7183.203: 0.0516% ( 2)
00:25:19.734 7183.203 - 7211.822: 0.0737% ( 3)
00:25:19.734 7211.822 - 7240.440: 0.1400% ( 9)
00:25:19.734 7240.440 - 7269.059: 0.2580% ( 16)
00:25:19.734 7269.059 - 7297.677: 0.4201% ( 22)
00:25:19.734 7297.677 - 7326.295: 0.6117% ( 26)
00:25:19.734 7326.295 - 7383.532: 1.1498% ( 73)
00:25:19.734 7383.532 - 7440.769: 2.1079% ( 130)
00:25:19.734 7440.769 - 7498.005: 3.6704% ( 212)
00:25:19.734 7498.005 - 7555.242: 5.8225% ( 292)
00:25:19.734 7555.242 - 7612.479: 8.2326% ( 327)
00:25:19.734 7612.479 - 7669.715: 10.8859% ( 360)
00:25:19.734 7669.715 - 7726.952: 13.5908% ( 367)
00:25:19.734 7726.952 - 7784.189: 16.5905% ( 407)
00:25:19.734 7784.189 - 7841.425: 19.8261% ( 439)
00:25:19.734 7841.425 - 7898.662: 23.0616% ( 439)
00:25:19.734 7898.662 - 7955.899: 26.4225% ( 456)
00:25:19.734 7955.899 - 8013.135: 29.7981% ( 458)
00:25:19.734 8013.135 - 8070.372: 33.1810% ( 459)
00:25:19.734 8070.372 - 8127.609: 36.7335% ( 482)
00:25:19.734 8127.609 - 8184.845: 40.3597% ( 492)
00:25:19.734 8184.845 - 8242.082: 43.8606% ( 475)
00:25:19.734 8242.082 - 8299.319: 47.4425% ( 486)
00:25:19.734 8299.319 - 8356.555: 50.7739% ( 452)
00:25:19.734 8356.555 - 8413.792: 53.8104% ( 412)
00:25:19.734 8413.792 - 8471.029: 56.6111% ( 380)
00:25:19.734 8471.029 - 8528.266: 59.0065% ( 325)
00:25:19.734 8528.266 - 8585.502: 60.9449% ( 263)
00:25:19.734 8585.502 - 8642.739: 62.6695% ( 234)
00:25:19.734 8642.739 - 8699.976: 64.2983% ( 221)
00:25:19.734 8699.976 - 8757.212: 65.6029% ( 177)
00:25:19.734 8757.212 - 8814.449: 66.7232% ( 152)
00:25:19.734 8814.449 - 8871.686: 67.7919% ( 145)
00:25:19.734 8871.686 - 8928.922: 68.8163% ( 139)
00:25:19.734 8928.922 - 8986.159: 69.7671% ( 129)
00:25:19.734 8986.159 - 9043.396: 70.6442% ( 119)
00:25:19.734 9043.396 - 9100.632: 71.4696% ( 112)
00:25:19.734 9100.632 - 9157.869: 72.2804% ( 110)
00:25:19.734 9157.869 - 9215.106: 72.9658% ( 93)
00:25:19.734 9215.106 - 9272.342: 73.6807% ( 97)
00:25:19.734 9272.342 - 9329.579: 74.2851% ( 82)
00:25:19.734 9329.579 - 9386.816: 74.8526% ( 77)
00:25:19.734 9386.816 - 9444.052: 75.3096% ( 62)
00:25:19.734 9444.052 - 9501.289: 75.7149% ( 55)
00:25:19.734 9501.289 - 9558.526: 76.0761% ( 49)
00:25:19.734 9558.526 - 9615.762: 76.4225% ( 47)
00:25:19.734 9615.762 - 9672.999: 76.7099% ( 39)
00:25:19.734 9672.999 - 9730.236: 76.9752% ( 36)
00:25:19.734 9730.236 - 9787.472: 77.2627% ( 39)
00:25:19.734 9787.472 - 9844.709: 77.6017% ( 46)
00:25:19.734 9844.709 - 9901.946: 77.9923% ( 53)
00:25:19.734 9901.946 - 9959.183: 78.3240% ( 45)
00:25:19.734 9959.183 - 10016.419: 78.6999% ( 51)
00:25:19.734 10016.419 - 10073.656: 79.0610% ( 49)
00:25:19.734 10073.656 - 10130.893: 79.4369% ( 51)
00:25:19.734 10130.893 - 10188.129: 79.8128% ( 51)
00:25:19.734 10188.129 - 10245.366: 80.2403% ( 58)
00:25:19.734 10245.366 - 10302.603: 80.6604% ( 57)
00:25:19.734 10302.603 - 10359.839: 81.0584% ( 54)
00:25:19.734 10359.839 - 10417.076: 81.4932% ( 59)
00:25:19.734 10417.076 - 10474.313: 81.9354% ( 60)
00:25:19.734 10474.313 - 10531.549: 82.3408% ( 55)
00:25:19.734 10531.549 - 10588.786: 82.7462% ( 55)
00:25:19.734 10588.786 - 10646.023: 83.1442% ( 54)
00:25:19.734 10646.023 - 10703.259: 83.5274% ( 52)
00:25:19.734 10703.259 - 10760.496: 83.9770% ( 61)
00:25:19.734 10760.496 - 10817.733: 84.3971% ( 57)
00:25:19.734 10817.733 - 10874.969: 84.8025% ( 55)
00:25:19.734 10874.969 - 10932.206: 85.1415% ( 46)
00:25:19.734 10932.206 - 10989.443: 85.4290% ( 39)
00:25:19.734 10989.443 - 11046.679: 85.6943% ( 36)
00:25:19.734 11046.679 - 11103.916: 85.9301% ( 32)
00:25:19.734 11103.916 - 11161.153: 86.1881% ( 35)
00:25:19.734 11161.153 - 11218.390: 86.4387% ( 34)
00:25:19.734 11218.390 - 11275.626: 86.7188% ( 38)
00:25:19.734 11275.626 - 11332.863: 86.9620% ( 33)
00:25:19.734 11332.863 - 11390.100: 87.1683% ( 28)
00:25:19.734 11390.100 - 11447.336: 87.3673% ( 27)
00:25:19.734 11447.336 - 11504.573: 87.5442% ( 24)
00:25:19.734 11504.573 - 11561.810: 87.7211% ( 24)
00:25:19.734 11561.810 - 11619.046: 87.8906% ( 23)
00:25:19.734 11619.046 - 11676.283: 88.0454% ( 21)
00:25:19.734 11676.283 - 11733.520: 88.1854% ( 19)
00:25:19.734 11733.520 - 11790.756: 88.3107% ( 17)
00:25:19.734 11790.756 - 11847.993: 88.4287% ( 16)
00:25:19.734 11847.993 - 11905.230: 88.5540% ( 17)
00:25:19.734 11905.230 - 11962.466: 88.6719% ( 16)
00:25:19.734 11962.466 - 12019.703: 88.7677% ( 13)
00:25:19.734 12019.703 - 12076.940: 88.8635% ( 13)
00:25:19.734 12076.940 - 12134.176: 88.9814% ( 16)
00:25:19.734 12134.176 - 12191.413: 89.1067% ( 17)
00:25:19.734 12191.413 - 12248.650: 89.2246% ( 16)
00:25:19.734 12248.650 - 12305.886: 89.3278% ( 14)
00:25:19.734 12305.886 - 12363.123: 89.4458% ( 16)
00:25:19.734 12363.123 - 12420.360: 89.5489% ( 14)
00:25:19.734 12420.360 - 12477.597: 89.6521% ( 14)
00:25:19.734 12477.597 - 12534.833: 89.7553% ( 14)
00:25:19.734 12534.833 - 12592.070: 89.8953% ( 19)
00:25:19.735 12592.070 - 12649.307: 90.0280% ( 18)
00:25:19.735 12649.307 - 12706.543: 90.1091% ( 11)
00:25:19.735 12706.543 - 12763.780: 90.1902% ( 11)
00:25:19.735 12763.780 - 12821.017: 90.2933% ( 14)
00:25:19.735 12821.017 - 12878.253: 90.3744% ( 11)
00:25:19.735 12878.253 - 12935.490: 90.4702% ( 13)
00:25:19.735 12935.490 - 12992.727: 90.5881% ( 16)
00:25:19.735 12992.727 - 13049.963: 90.6913% ( 14)
00:25:19.735 13049.963 - 13107.200: 90.8093% ( 16)
00:25:19.735 13107.200 - 13164.437: 90.9124% ( 14)
00:25:19.735 13164.437 - 13221.673: 91.0451% ( 18)
00:25:19.735 13221.673 - 13278.910: 91.1409% ( 13)
00:25:19.735 13278.910 - 13336.147: 91.2588% ( 16)
00:25:19.735 13336.147 - 13393.383: 91.3399% ( 11)
00:25:19.735 13393.383 - 13450.620: 91.4136% ( 10)
00:25:19.735 13450.620 - 13507.857: 91.5021% ( 12)
00:25:19.735 13507.857 - 13565.093: 91.5831% ( 11)
00:25:19.735 13565.093 - 13622.330: 91.6716% ( 12)
00:25:19.735 13622.330 - 13679.567: 91.7527% ( 11)
00:25:19.735 13679.567 - 13736.803: 91.8116% ( 8)
00:25:19.735 13736.803 - 13794.040: 91.8632% ( 7)
00:25:19.735 13794.040 - 13851.277: 91.9295% ( 9)
00:25:19.735 13851.277 - 13908.514: 91.9811% ( 7)
00:25:19.735 13908.514 - 13965.750: 92.0475% ( 9)
00:25:19.735 13965.750 - 14022.987: 92.1064% ( 8)
00:25:19.735 14022.987 - 14080.224: 92.1580% ( 7)
00:25:19.735 14080.224 - 14137.460: 92.2317% ( 10)
00:25:19.735 14137.460 - 14194.697: 92.3054% ( 10)
00:25:19.735 14194.697 - 14251.934: 92.3718% ( 9)
00:25:19.735 14251.934 - 14309.170: 92.4233% ( 7)
00:25:19.735 14309.170 - 14366.407: 92.5044% ( 11)
00:25:19.735 14366.407 - 14423.644: 92.6002% ( 13)
00:25:19.735 14423.644 - 14480.880: 92.7034% ( 14)
00:25:19.735 14480.880 - 14538.117: 92.7919% ( 12)
00:25:19.735 14538.117 - 14595.354: 92.9024% ( 15)
00:25:19.735 14595.354 - 14652.590: 93.0056% ( 14)
00:25:19.735 14652.590 - 14767.064: 93.1751% ( 23)
00:25:19.735 14767.064 - 14881.537: 93.3446% ( 23)
00:25:19.735 14881.537 - 14996.010: 93.5142% ( 23)
00:25:19.735 14996.010 - 15110.484: 93.6910% ( 24)
00:25:19.735 15110.484 - 15224.957: 93.8900% ( 27)
00:25:19.735 15224.957 - 15339.431: 94.1406% ( 34)
00:25:19.735 15339.431 - 15453.904: 94.3617% ( 30)
00:25:19.735 15453.904 - 15568.377: 94.6344% ( 37)
00:25:19.735 15568.377 - 15682.851: 94.9292% ( 40)
00:25:19.735 15682.851 - 15797.324: 95.1651% ( 32)
00:25:19.735 15797.324 - 15911.797: 95.4378% ( 37)
00:25:19.735 15911.797 - 16026.271: 95.7326% ( 40)
00:25:19.735 16026.271 - 16140.744: 95.9611% ( 31)
00:25:19.735 16140.744 - 16255.217: 96.1748% ( 29)
00:25:19.735 16255.217 - 16369.691: 96.3443% ( 23)
00:25:19.735 16369.691 - 16484.164: 96.4623% ( 16)
00:25:19.735 16484.164 - 16598.638: 96.5360% ( 10)
00:25:19.735 16598.638 - 16713.111: 96.6023% ( 9)
00:25:19.735 16713.111 - 16827.584: 96.6760% ( 10)
00:25:19.735 16827.584 - 16942.058: 96.7497% ( 10)
00:25:19.735 16942.058 - 17056.531: 96.8455% ( 13)
00:25:19.735 17056.531 - 17171.004: 96.9340% ( 12)
00:25:19.735 17171.004 - 17285.478: 97.0150% ( 11)
00:25:19.735 17285.478 - 17399.951: 97.0887% ( 10)
00:25:19.735 17399.951 - 17514.424: 97.1846% ( 13)
00:25:19.735 17514.424 - 17628.898: 97.2730% ( 12)
00:25:19.735 17628.898 - 17743.371: 97.3467% ( 10)
00:25:19.735 17743.371 - 17857.845: 97.4130% ( 9)
00:25:19.735 17857.845 - 17972.318: 97.4646% ( 7)
00:25:19.735 17972.318 - 18086.791: 97.5162% ( 7)
00:25:19.735 18086.791 - 18201.265: 97.5531% ( 5)
00:25:19.735 18201.265 - 18315.738: 97.5899% ( 5)
00:25:19.735 18315.738 - 18430.211: 97.6268% ( 5)
00:25:19.735 18430.211 - 18544.685: 97.6415% ( 2)
00:25:19.735 18773.631 - 18888.105: 97.6562% ( 2)
00:25:19.735 18888.105 - 19002.578: 97.7005% ( 6)
00:25:19.735 19002.578 - 19117.052: 97.8184% ( 16)
00:25:19.735 19117.052 - 19231.525: 97.9437% ( 17)
00:25:19.735 19231.525 - 19345.998: 98.0395% ( 13)
00:25:19.735 19345.998 - 19460.472: 98.1501% ( 15)
00:25:19.735 19460.472 - 19574.945: 98.2754% ( 17)
00:25:19.735 19574.945 - 19689.418: 98.3859% ( 15)
00:25:19.735 19689.418 - 19803.892: 98.5186% ( 18)
00:25:19.735 19803.892 - 19918.365: 98.6365% ( 16)
00:25:19.735 19918.365 - 20032.838: 98.7618% ( 17)
00:25:19.735 20032.838 - 20147.312: 98.8576% ( 13)
00:25:19.735 20147.312 - 20261.785: 98.9313% ( 10)
00:25:19.735 20261.785 - 20376.259: 99.0050% ( 10)
00:25:19.735 20376.259 - 20490.732: 99.0713% ( 9)
00:25:19.735 20490.732 - 20605.205: 99.1303% ( 8)
00:25:19.735 20605.205 - 20719.679: 99.1819% ( 7)
00:25:19.735 20719.679 - 20834.152: 99.2261% ( 6)
00:25:19.735 20834.152 - 20948.625: 99.2630% ( 5)
00:25:19.735 20948.625 - 21063.099: 99.2998% ( 5)
00:25:19.735 21063.099 - 21177.572: 99.3367% ( 5)
00:25:19.735 21177.572 - 21292.045: 99.3735% ( 5)
00:25:19.735 21292.045 - 21406.519: 99.4177% ( 6)
00:25:19.735 21406.519 - 21520.992: 99.4546% ( 5)
00:25:19.735 21520.992 - 21635.466: 99.4767% ( 3)
00:25:19.735 21635.466 - 21749.939: 99.4988% ( 3)
00:25:19.735 21749.939 - 21864.412: 99.5136% ( 2)
00:25:19.735 21864.412 - 21978.886: 99.5283% ( 2)
00:25:19.735 28160.447 - 28274.921: 99.5504% ( 3)
00:25:19.735 28274.921 - 28389.394: 99.5652% ( 2)
00:25:19.735 28389.394 - 28503.867: 99.5873% ( 3)
00:25:19.735 28503.867 - 28618.341: 99.6020% ( 2)
00:25:19.735 28618.341 - 28732.814: 99.6241% ( 3)
00:25:19.735 28732.814 - 28847.287: 99.6462% ( 3)
00:25:19.735 28847.287 - 28961.761: 99.6683% ( 3)
00:25:19.735 28961.761 - 29076.234: 99.6904% ( 3)
00:25:19.735 29076.234 - 29190.707: 99.7126% ( 3)
00:25:19.735 29190.707 - 29305.181: 99.7347% ( 3)
00:25:19.735 29305.181 - 29534.128: 99.7715% ( 5)
00:25:19.735 29534.128 - 29763.074: 99.8157% ( 6)
00:25:19.735 29763.074 - 29992.021: 99.8526% ( 5)
00:25:19.735 29992.021 - 30220.968: 99.8968% ( 6)
00:25:19.735 30220.968 - 30449.914: 99.9337% ( 5)
00:25:19.735 30449.914 - 30678.861: 99.9705% ( 5)
00:25:19.735 30678.861 - 30907.808: 100.0000% ( 4)
00:25:19.735
00:25:19.735 13:45:27 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:25:21.118 Initializing NVMe Controllers
00:25:21.118 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:25:21.118 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:25:21.118 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:25:21.118 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:25:21.118 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:25:21.118 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:25:21.118 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:25:21.118 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:25:21.118 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:25:21.118 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:25:21.118 Initialization complete. Launching workers.
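The spdk_nvme_perf invocation recorded just above is the entire setup for the results that follow. As a minimal sketch, here is an equivalent standalone run, assuming a built SPDK tree at the path this job uses; the -q/-w/-o/-t meanings follow spdk_nvme_perf's usage text, while my readings of -LL (software latency tracking plus the per-bucket histograms) and -i (shared memory group ID) are assumptions rather than something this log states:

#!/usr/bin/env bash
# Sketch: rerun the perf pass captured in the log line above.
#   -q 128    keep 128 I/Os outstanding per namespace
#   -w write  100% write workload
#   -o 12288  I/O size in bytes (12 KiB)
#   -t 1      run for one second
#   -LL       track latency in software and print histograms (assumed)
#   -i 0      shared memory group ID (assumed)
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # layout used by this job

"$SPDK_DIR/build/bin/spdk_nvme_perf" -q 128 -w write -o 12288 -t 1 -LL -i 0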
00:25:21.118 ========================================================
00:25:21.118 Latency(us)
00:25:21.118 Device Information : IOPS MiB/s Average min max
00:25:21.118 PCIE (0000:00:10.0) NSID 1 from core 0: 9039.96 105.94 14197.38 9428.62 54317.22
00:25:21.118 PCIE (0000:00:11.0) NSID 1 from core 0: 9039.96 105.94 14163.35 9137.85 51757.98
00:25:21.118 PCIE (0000:00:13.0) NSID 1 from core 0: 9039.96 105.94 14129.87 9152.67 50137.84
00:25:21.118 PCIE (0000:00:12.0) NSID 1 from core 0: 9039.96 105.94 14097.10 9031.37 47908.36
00:25:21.118 PCIE (0000:00:12.0) NSID 2 from core 0: 9039.96 105.94 14065.57 9360.22 45282.04
00:25:21.118 PCIE (0000:00:12.0) NSID 3 from core 0: 9039.96 105.94 14033.70 8988.88 42659.85
00:25:21.118 ========================================================
00:25:21.118 Total : 54239.77 635.62 14114.50 8988.88 54317.22
00:25:21.118
00:25:21.118 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:25:21.118 =================================================================================
00:25:21.118 1.00000% : 9844.709us
00:25:21.118 10.00000% : 10760.496us
00:25:21.118 25.00000% : 11962.466us
00:25:21.118 50.00000% : 13336.147us
00:25:21.118 75.00000% : 15682.851us
00:25:21.118 90.00000% : 17399.951us
00:25:21.118 95.00000% : 18659.158us
00:25:21.118 98.00000% : 20376.259us
00:25:21.118 99.00000% : 41210.410us
00:25:21.118 99.50000% : 51970.907us
00:25:21.118 99.90000% : 53802.480us
00:25:21.118 99.99000% : 54489.321us
00:25:21.118 99.99900% : 54489.321us
00:25:21.118 99.99990% : 54489.321us
00:25:21.118 99.99999% : 54489.321us
00:25:21.118
00:25:21.118 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:25:21.118 =================================================================================
00:25:21.118 1.00000% : 9787.472us
00:25:21.118 10.00000% : 10874.969us
00:25:21.118 25.00000% : 11905.230us
00:25:21.118 50.00000% : 13278.910us
00:25:21.118 75.00000% : 15797.324us
00:25:21.118 90.00000% : 17628.898us
00:25:21.118 95.00000% : 18430.211us
00:25:21.118 98.00000% : 20032.838us
00:25:21.118 99.00000% : 40065.677us
00:25:21.118 99.50000% : 49910.386us
00:25:21.118 99.90000% : 51513.013us
00:25:21.118 99.99000% : 51970.907us
00:25:21.118 99.99900% : 51970.907us
00:25:21.118 99.99990% : 51970.907us
00:25:21.118 99.99999% : 51970.907us
00:25:21.118
00:25:21.118 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:25:21.118 =================================================================================
00:25:21.118 1.00000% : 9558.526us
00:25:21.118 10.00000% : 10932.206us
00:25:21.118 25.00000% : 11905.230us
00:25:21.118 50.00000% : 13336.147us
00:25:21.118 75.00000% : 15682.851us
00:25:21.118 90.00000% : 17514.424us
00:25:21.118 95.00000% : 18315.738us
00:25:21.118 98.00000% : 21406.519us
00:25:21.118 99.00000% : 38691.997us
00:25:21.118 99.50000% : 48078.812us
00:25:21.118 99.90000% : 49910.386us
00:25:21.118 99.99000% : 50139.333us
00:25:21.118 99.99900% : 50139.333us
00:25:21.118 99.99990% : 50139.333us
00:25:21.118 99.99999% : 50139.333us
00:25:21.118
00:25:21.118 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:25:21.118 =================================================================================
00:25:21.118 1.00000% : 9615.762us
00:25:21.118 10.00000% : 10932.206us
00:25:21.118 25.00000% : 11790.756us
00:25:21.118 50.00000% : 13336.147us
00:25:21.118 75.00000% : 15568.377us
00:25:21.118 90.00000% : 17628.898us
00:25:21.118 95.00000% : 18544.685us
00:25:21.118 98.00000% : 21063.099us
00:25:21.118 99.00000% : 36402.529us
00:25:21.118 99.50000% : 45789.345us
00:25:21.118 99.90000% : 47620.919us
00:25:21.118 99.99000% : 48078.812us
00:25:21.118 99.99900% : 48078.812us
00:25:21.118 99.99990% : 48078.812us
00:25:21.118 99.99999% : 48078.812us
00:25:21.118
00:25:21.118 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:25:21.118 =================================================================================
00:25:21.118 1.00000% : 9730.236us
00:25:21.118 10.00000% : 10817.733us
00:25:21.118 25.00000% : 11847.993us
00:25:21.118 50.00000% : 13278.910us
00:25:21.118 75.00000% : 15568.377us
00:25:21.118 90.00000% : 17399.951us
00:25:21.118 95.00000% : 18201.265us
00:25:21.118 98.00000% : 20719.679us
00:25:21.118 99.00000% : 34570.955us
00:25:21.118 99.50000% : 42126.197us
00:25:21.118 99.90000% : 45102.505us
00:25:21.118 99.99000% : 45331.452us
00:25:21.118 99.99900% : 45331.452us
00:25:21.118 99.99990% : 45331.452us
00:25:21.118 99.99999% : 45331.452us
00:25:21.118
00:25:21.118 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:25:21.118 =================================================================================
00:25:21.118 1.00000% : 9787.472us
00:25:21.118 10.00000% : 10989.443us
00:25:21.118 25.00000% : 11905.230us
00:25:21.118 50.00000% : 13221.673us
00:25:21.118 75.00000% : 15568.377us
00:25:21.118 90.00000% : 17399.951us
00:25:21.118 95.00000% : 18201.265us
00:25:21.118 98.00000% : 20834.152us
00:25:21.118 99.00000% : 32510.435us
00:25:21.118 99.50000% : 39607.783us
00:25:21.118 99.90000% : 42355.144us
00:25:21.118 99.99000% : 42813.038us
00:25:21.118 99.99900% : 42813.038us
00:25:21.118 99.99990% : 42813.038us
00:25:21.118 99.99999% : 42813.038us
00:25:21.118
00:25:21.118 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:25:21.118 ==============================================================================
00:25:21.118 Range in us Cumulative IO count
00:25:21.118 9386.816 - 9444.052: 0.0220% ( 2)
00:25:21.118 9444.052 - 9501.289: 0.0550% ( 3)
00:25:21.118 9501.289 - 9558.526: 0.1540% ( 9)
00:25:21.118 9558.526 - 9615.762: 0.2751% ( 11)
00:25:21.118 9615.762 - 9672.999: 0.5062% ( 21)
00:25:21.118 9672.999 - 9730.236: 0.7042% ( 18)
00:25:21.118 9730.236 - 9787.472: 0.8913% ( 17)
00:25:21.118 9787.472 - 9844.709: 1.2104% ( 29)
00:25:21.118 9844.709 - 9901.946: 1.4855% ( 25)
00:25:21.118 9901.946 - 9959.183: 1.8156% ( 30)
00:25:21.118 9959.183 - 10016.419: 2.2887% ( 43)
00:25:21.118 10016.419 - 10073.656: 2.8279% ( 49)
00:25:21.118 10073.656 - 10130.893: 3.3231% ( 45)
00:25:21.118 10130.893 - 10188.129: 3.7962% ( 43)
00:25:21.118 10188.129 - 10245.366: 4.3464% ( 50)
00:25:21.118 10245.366 - 10302.603: 4.8195% ( 43)
00:25:21.118 10302.603 - 10359.839: 5.2707% ( 41)
00:25:21.118 10359.839 - 10417.076: 5.8869% ( 56)
00:25:21.118 10417.076 - 10474.313: 6.4261% ( 49)
00:25:21.118 10474.313 - 10531.549: 7.0202% ( 54)
00:25:21.118 10531.549 - 10588.786: 7.6585% ( 58)
00:25:21.118 10588.786 - 10646.023: 8.4067% ( 68)
00:25:21.118 10646.023 - 10703.259: 9.2430% ( 76)
00:25:21.118 10703.259 - 10760.496: 10.0352% ( 72)
00:25:21.118 10760.496 - 10817.733: 10.7064% ( 61)
00:25:21.118 10817.733 - 10874.969: 11.4217% ( 65)
00:25:21.118 10874.969 - 10932.206: 12.1809% ( 69)
00:25:21.118 10932.206 - 10989.443: 12.7641% ( 53)
00:25:21.118 10989.443 - 11046.679: 13.5013% ( 67)
00:25:21.118 11046.679 - 11103.916: 14.2276% ( 66)
00:25:21.118 11103.916 - 11161.153: 14.9648% ( 67)
00:25:21.118 11161.153 - 11218.390: 15.4820% ( 47)
00:25:21.118 11218.390 - 11275.626: 15.8121% ( 30)
00:25:21.118 11275.626 - 11332.863: 16.3292% ( 47)
00:25:21.118 11332.863 - 11390.100: 16.7804% ( 41)
00:25:21.118 11390.100 - 11447.336: 17.3636% ( 53)
00:25:21.118 11447.336 - 11504.573: 18.4199% ( 96)
00:25:21.118 11504.573 - 11561.810: 19.1461% ( 66)
00:25:21.118 11561.810 - 11619.046: 20.0814% ( 85)
00:25:21.118 11619.046 - 11676.283: 21.0057% ( 84)
00:25:21.118 11676.283 - 11733.520: 21.7980% ( 72)
00:25:21.118 11733.520 - 11790.756: 22.6122% ( 74)
00:25:21.118 11790.756 - 11847.993: 23.5805% ( 88)
00:25:21.118 11847.993 - 11905.230: 24.6369% ( 96)
00:25:21.118 11905.230 - 11962.466: 25.7812% ( 104)
00:25:21.118 11962.466 - 12019.703: 26.9586% ( 107)
00:25:21.118 12019.703 - 12076.940: 27.9489% ( 90)
00:25:21.118 12076.940 - 12134.176: 28.8292% ( 80)
00:25:21.118 12134.176 - 12191.413: 29.6325% ( 73)
00:25:21.118 12191.413 - 12248.650: 30.6008% ( 88)
00:25:21.118 12248.650 - 12305.886: 31.7011% ( 100)
00:25:21.118 12305.886 - 12363.123: 33.0326% ( 121)
00:25:21.118 12363.123 - 12420.360: 34.0889% ( 96)
00:25:21.118 12420.360 - 12477.597: 35.1232% ( 94)
00:25:21.118 12477.597 - 12534.833: 36.0805% ( 87)
00:25:21.118 12534.833 - 12592.070: 37.2909% ( 110)
00:25:21.118 12592.070 - 12649.307: 38.5233% ( 112)
00:25:21.118 12649.307 - 12706.543: 39.6347% ( 101)
00:25:21.118 12706.543 - 12763.780: 40.8231% ( 108)
00:25:21.118 12763.780 - 12821.017: 41.8574% ( 94)
00:25:21.118 12821.017 - 12878.253: 42.9798% ( 102)
00:25:21.118 12878.253 - 12935.490: 43.9701% ( 90)
00:25:21.118 12935.490 - 12992.727: 44.9164% ( 86)
00:25:21.119 12992.727 - 13049.963: 45.9507% ( 94)
00:25:21.119 13049.963 - 13107.200: 46.7210% ( 70)
00:25:21.119 13107.200 - 13164.437: 47.9423% ( 111)
00:25:21.119 13164.437 - 13221.673: 48.9547% ( 92)
00:25:21.119 13221.673 - 13278.910: 49.9230% ( 88)
00:25:21.119 13278.910 - 13336.147: 50.9133% ( 90)
00:25:21.119 13336.147 - 13393.383: 51.6725% ( 69)
00:25:21.119 13393.383 - 13450.620: 52.7399% ( 97)
00:25:21.119 13450.620 - 13507.857: 53.6532% ( 83)
00:25:21.119 13507.857 - 13565.093: 54.3134% ( 60)
00:25:21.119 13565.093 - 13622.330: 54.8966% ( 53)
00:25:21.119 13622.330 - 13679.567: 55.6228% ( 66)
00:25:21.119 13679.567 - 13736.803: 56.4481% ( 75)
00:25:21.119 13736.803 - 13794.040: 57.1413% ( 63)
00:25:21.119 13794.040 - 13851.277: 57.7245% ( 53)
00:25:21.119 13851.277 - 13908.514: 58.3187% ( 54)
00:25:21.119 13908.514 - 13965.750: 59.0889% ( 70)
00:25:21.119 13965.750 - 14022.987: 59.6171% ( 48)
00:25:21.119 14022.987 - 14080.224: 60.2553% ( 58)
00:25:21.119 14080.224 - 14137.460: 60.7614% ( 46)
00:25:21.119 14137.460 - 14194.697: 61.2566% ( 45)
00:25:21.119 14194.697 - 14251.934: 61.9168% ( 60)
00:25:21.119 14251.934 - 14309.170: 62.5550% ( 58)
00:25:21.119 14309.170 - 14366.407: 63.3253% ( 70)
00:25:21.119 14366.407 - 14423.644: 63.9635% ( 58)
00:25:21.119 14423.644 - 14480.880: 64.6457% ( 62)
00:25:21.119 14480.880 - 14538.117: 65.2179% ( 52)
00:25:21.119 14538.117 - 14595.354: 65.7460% ( 48)
00:25:21.119 14595.354 - 14652.590: 66.5823% ( 76)
00:25:21.119 14652.590 - 14767.064: 67.6937% ( 101)
00:25:21.119 14767.064 - 14881.537: 68.6950% ( 91)
00:25:21.119 14881.537 - 14996.010: 69.6853% ( 90)
00:25:21.119 14996.010 - 15110.484: 70.6976% ( 92)
00:25:21.119 15110.484 - 15224.957: 71.4789% ( 71)
00:25:21.119 15224.957 - 15339.431: 72.2601% ( 71)
00:25:21.119 15339.431 - 15453.904: 73.4265% ( 106)
00:25:21.119 15453.904 - 15568.377: 74.4168% ( 90)
00:25:21.119 15568.377 - 15682.851: 75.4071% ( 90)
00:25:21.119 15682.851 - 15797.324: 76.2324% ( 75)
00:25:21.119 15797.324 - 15911.797: 77.1457% ( 83)
00:25:21.119 15911.797 - 16026.271: 78.0920% ( 86)
00:25:21.119 16026.271 - 16140.744: 79.0713% ( 89)
00:25:21.119 16140.744 - 16255.217: 80.2707% ( 109)
00:25:21.119 16255.217 - 16369.691: 81.5141% ( 113)
00:25:21.119 16369.691 - 16484.164: 82.5704% ( 96)
00:25:21.119 16484.164 - 16598.638: 83.5827% ( 92)
00:25:21.119 16598.638 - 16713.111: 84.6501% ( 97)
00:25:21.119 16713.111 - 16827.584: 85.6844% ( 94)
00:25:21.119 16827.584 - 16942.058: 86.6527% ( 88)
00:25:21.119 16942.058 - 17056.531: 87.8741% ( 111)
00:25:21.119 17056.531 - 17171.004: 88.7544% ( 80)
00:25:21.119 17171.004 - 17285.478: 89.6237% ( 79)
00:25:21.119 17285.478 - 17399.951: 90.3609% ( 67)
00:25:21.119 17399.951 - 17514.424: 91.1092% ( 68)
00:25:21.119 17514.424 - 17628.898: 91.8464% ( 67)
00:25:21.119 17628.898 - 17743.371: 92.5836% ( 67)
00:25:21.119 17743.371 - 17857.845: 93.0568% ( 43)
00:25:21.119 17857.845 - 17972.318: 93.4089% ( 32)
00:25:21.119 17972.318 - 18086.791: 93.7390% ( 30)
00:25:21.119 18086.791 - 18201.265: 94.0251% ( 26)
00:25:21.119 18201.265 - 18315.738: 94.4212% ( 36)
00:25:21.119 18315.738 - 18430.211: 94.6853% ( 24)
00:25:21.119 18430.211 - 18544.685: 94.9714% ( 26)
00:25:21.119 18544.685 - 18659.158: 95.2135% ( 22)
00:25:21.119 18659.158 - 18773.631: 95.4335% ( 20)
00:25:21.119 18773.631 - 18888.105: 95.7196% ( 26)
00:25:21.119 18888.105 - 19002.578: 96.1378% ( 38)
00:25:21.119 19002.578 - 19117.052: 96.4239% ( 26)
00:25:21.119 19117.052 - 19231.525: 96.5229% ( 9)
00:25:21.119 19231.525 - 19345.998: 96.5999% ( 7)
00:25:21.119 19345.998 - 19460.472: 96.6989% ( 9)
00:25:21.119 19460.472 - 19574.945: 96.7760% ( 7)
00:25:21.119 19574.945 - 19689.418: 96.8860% ( 10)
00:25:21.119 19689.418 - 19803.892: 97.1281% ( 22)
00:25:21.119 19803.892 - 19918.365: 97.2601% ( 12)
00:25:21.119 19918.365 - 20032.838: 97.4802% ( 20)
00:25:21.119 20032.838 - 20147.312: 97.6012% ( 11)
00:25:21.119 20147.312 - 20261.785: 97.7553% ( 14)
00:25:21.119 20261.785 - 20376.259: 98.1514% ( 36)
00:25:21.119 20376.259 - 20490.732: 98.2945% ( 13)
00:25:21.119 20490.732 - 20605.205: 98.3605% ( 6)
00:25:21.119 20605.205 - 20719.679: 98.4045% ( 4)
00:25:21.119 20719.679 - 20834.152: 98.4155% ( 1)
00:25:21.119 20834.152 - 20948.625: 98.4265% ( 1)
00:25:21.119 20948.625 - 21063.099: 98.4375% ( 1)
00:25:21.119 21063.099 - 21177.572: 98.4485% ( 1)
00:25:21.119 21292.045 - 21406.519: 98.4815% ( 3)
00:25:21.119 21406.519 - 21520.992: 98.5145% ( 3)
00:25:21.119 21520.992 - 21635.466: 98.5475% ( 3)
00:25:21.119 21635.466 - 21749.939: 98.5585% ( 1)
00:25:21.119 21749.939 - 21864.412: 98.5915% ( 3)
00:25:21.119 39149.890 - 39378.837: 98.6026% ( 1)
00:25:21.119 39378.837 - 39607.783: 98.6576% ( 5)
00:25:21.119 39607.783 - 39836.730: 98.7016% ( 4)
00:25:21.119 39836.730 - 40065.677: 98.7456% ( 4)
00:25:21.119 40065.677 - 40294.624: 98.8006% ( 5)
00:25:21.119 40294.624 - 40523.570: 98.8556% ( 5)
00:25:21.119 40523.570 - 40752.517: 98.9107% ( 5)
00:25:21.119 40752.517 - 40981.464: 98.9657% ( 5)
00:25:21.119 40981.464 - 41210.410: 99.0097% ( 4)
00:25:21.119 41210.410 - 41439.357: 99.0647% ( 5)
00:25:21.119 41439.357 - 41668.304: 99.1307% ( 6)
00:25:21.119 41668.304 - 41897.251: 99.1857% ( 5)
00:25:21.119 41897.251 - 42126.197: 99.2298% ( 4)
00:25:21.119 42126.197 - 42355.144: 99.2738% ( 4)
00:25:21.119 42355.144 - 42584.091: 99.2958% ( 2)
00:25:21.119 50826.173 - 51055.120: 99.3068% ( 1)
00:25:21.119 51055.120 - 51284.066: 99.3398% ( 3)
00:25:21.119 51284.066 - 51513.013: 99.3948% ( 5)
00:25:21.119 51513.013 - 51741.960: 99.4388% ( 4)
00:25:21.119 51741.960 - 51970.907: 99.5048% ( 6)
00:25:21.120 51970.907 - 52199.853: 99.5489% ( 4)
00:25:21.120 52199.853 - 52428.800: 99.6039% ( 5)
00:25:21.120 52428.800 - 52657.747: 99.6479% ( 4)
00:25:21.120 52657.747 - 52886.693: 99.7029% ( 5)
00:25:21.120 52886.693 - 53115.640: 99.7469% ( 4)
00:25:21.120 53115.640 - 53344.587: 99.8129% ( 6)
00:25:21.120 53344.587 - 53573.534: 99.8570% ( 4)
00:25:21.120 53573.534 - 53802.480: 99.9010% ( 4)
00:25:21.120 53802.480 - 54031.427: 99.9560% ( 5)
00:25:21.120 54031.427 - 54260.374: 99.9890% ( 3)
00:25:21.120 54260.374 - 54489.321: 100.0000% ( 1)
00:25:21.120
00:25:21.120 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:25:21.120 ==============================================================================
00:25:21.120 Range in us Cumulative IO count
00:25:21.120 9100.632 - 9157.869: 0.0110% ( 1)
00:25:21.120 9444.052 - 9501.289: 0.0330% ( 2)
00:25:21.120 9501.289 - 9558.526: 0.0990% ( 6)
00:25:21.120 9558.526 - 9615.762: 0.2421% ( 13)
00:25:21.120 9615.762 - 9672.999: 0.5282% ( 26)
00:25:21.120 9672.999 - 9730.236: 0.8253% ( 27)
00:25:21.120 9730.236 - 9787.472: 1.0563% ( 21)
00:25:21.120 9787.472 - 9844.709: 1.3204% ( 24)
00:25:21.120 9844.709 - 9901.946: 1.5405% ( 20)
00:25:21.120 9901.946 - 9959.183: 1.7826% ( 22)
00:25:21.120 9959.183 - 10016.419: 1.9916% ( 19)
00:25:21.120 10016.419 - 10073.656: 2.2447% ( 23)
00:25:21.120 10073.656 - 10130.893: 2.5528% ( 28)
00:25:21.120 10130.893 - 10188.129: 3.0040% ( 41)
00:25:21.120 10188.129 - 10245.366: 3.6312% ( 57)
00:25:21.120 10245.366 - 10302.603: 3.9503% ( 29)
00:25:21.120 10302.603 - 10359.839: 4.2033% ( 23)
00:25:21.120 10359.839 - 10417.076: 4.4234% ( 20)
00:25:21.120 10417.076 - 10474.313: 4.8966% ( 43)
00:25:21.120 10474.313 - 10531.549: 5.2267% ( 30)
00:25:21.120 10531.549 - 10588.786: 5.7658% ( 49)
00:25:21.120 10588.786 - 10646.023: 6.5471% ( 71)
00:25:21.120 10646.023 - 10703.259: 7.1963% ( 59)
00:25:21.120 10703.259 - 10760.496: 8.0766% ( 80)
00:25:21.120 10760.496 - 10817.733: 9.0669% ( 90)
00:25:21.120 10817.733 - 10874.969: 10.1783% ( 101)
00:25:21.120 10874.969 - 10932.206: 11.3006% ( 102)
00:25:21.120 10932.206 - 10989.443: 12.6871% ( 126)
00:25:21.120 10989.443 - 11046.679: 13.9965% ( 119)
00:25:21.120 11046.679 - 11103.916: 15.2839% ( 117)
00:25:21.120 11103.916 - 11161.153: 16.0871% ( 73)
00:25:21.120 11161.153 - 11218.390: 16.9784% ( 81)
00:25:21.120 11218.390 - 11275.626: 17.5726% ( 54)
00:25:21.120 11275.626 - 11332.863: 18.1228% ( 50)
00:25:21.120 11332.863 - 11390.100: 18.7610% ( 58)
00:25:21.120 11390.100 - 11447.336: 19.5753% ( 74)
00:25:21.120 11447.336 - 11504.573: 20.3235% ( 68)
00:25:21.120 11504.573 - 11561.810: 20.9947% ( 61)
00:25:21.120 11561.810 - 11619.046: 21.6549% ( 60)
00:25:21.120 11619.046 - 11676.283: 22.2711% ( 56)
00:25:21.120 11676.283 - 11733.520: 23.2064% ( 85)
00:25:21.120 11733.520 - 11790.756: 24.0207% ( 74)
00:25:21.120 11790.756 - 11847.993: 24.8129% ( 72)
00:25:21.120 11847.993 - 11905.230: 25.7923% ( 89)
00:25:21.120 11905.230 - 11962.466: 26.8266% ( 94)
00:25:21.120 11962.466 - 12019.703: 27.8279% ( 91)
00:25:21.120 12019.703 - 12076.940: 28.7742% ( 86)
00:25:21.120 12076.940 - 12134.176: 29.8415% ( 97)
00:25:21.120 12134.176 - 12191.413: 30.6118% ( 70)
00:25:21.120 12191.413 - 12248.650: 31.2830% ( 61)
00:25:21.120 12248.650 - 12305.886: 31.9102% ( 57)
00:25:21.120 12305.886 - 12363.123: 32.9225% ( 92)
00:25:21.120 12363.123 - 12420.360: 33.9459% ( 93)
00:25:21.120 12420.360 - 12477.597: 34.7931% ( 77)
00:25:21.120 12477.597 - 12534.833: 35.9375% ( 104)
00:25:21.120 12534.833 - 12592.070: 36.6307% ( 63)
00:25:21.120 12592.070 - 12649.307: 37.3460% ( 65)
00:25:21.120 12649.307 - 12706.543: 38.0612% ( 65)
00:25:21.120 12706.543 - 12763.780: 38.9525% ( 81)
00:25:21.120 12763.780 - 12821.017: 39.8658% ( 83)
00:25:21.120 12821.017 - 12878.253: 40.8781% ( 92)
00:25:21.120 12878.253 - 12935.490: 41.8904% ( 92)
00:25:21.120 12935.490 - 12992.727: 43.1338% ( 113)
00:25:21.120 12992.727 - 13049.963: 44.7733% ( 149)
00:25:21.120 13049.963 - 13107.200: 46.2148% ( 131)
00:25:21.120 13107.200 - 13164.437: 47.4582% ( 113)
00:25:21.120 13164.437 - 13221.673: 49.0977% ( 149)
00:25:21.120 13221.673 - 13278.910: 50.6162% ( 138)
00:25:21.120 13278.910 - 13336.147: 51.8156% ( 109)
00:25:21.120 13336.147 - 13393.383: 52.9379% ( 102)
00:25:21.120 13393.383 - 13450.620: 54.0163% ( 98)
00:25:21.120 13450.620 - 13507.857: 54.9076% ( 81)
00:25:21.120 13507.857 - 13565.093: 55.7218% ( 74)
00:25:21.120 13565.093 - 13622.330: 56.3820% ( 60)
00:25:21.120 13622.330 - 13679.567: 57.0092% ( 57)
00:25:21.120 13679.567 - 13736.803: 57.7685% ( 69)
00:25:21.120 13736.803 - 13794.040: 58.5497% ( 71)
00:25:21.120 13794.040 - 13851.277: 59.5070% ( 87)
00:25:21.120 13851.277 - 13908.514: 60.2553% ( 68)
00:25:21.120 13908.514 - 13965.750: 60.9925% ( 67)
00:25:21.120 13965.750 - 14022.987: 61.4217% ( 39)
00:25:21.120 14022.987 - 14080.224: 61.7958% ( 34)
00:25:21.120 14080.224 - 14137.460: 62.2469% ( 41)
00:25:21.120 14137.460 - 14194.697: 62.6871% ( 40)
00:25:21.120 14194.697 - 14251.934: 63.3033% ( 56)
00:25:21.120 14251.934 - 14309.170: 63.8094% ( 46)
00:25:21.120 14309.170 - 14366.407: 64.2716% ( 42)
00:25:21.120 14366.407 - 14423.644: 64.6677% ( 36)
00:25:21.120 14423.644 - 14480.880: 65.1298% ( 42)
00:25:21.120 14480.880 - 14538.117: 65.6250% ( 45)
00:25:21.120 14538.117 - 14595.354: 66.2852% ( 60)
00:25:21.120 14595.354 - 14652.590: 66.7143% ( 39)
00:25:21.120 14652.590 - 14767.064: 67.6386% ( 84)
00:25:21.120 14767.064 - 14881.537: 68.4089% ( 70)
00:25:21.120 14881.537 - 14996.010: 69.1461% ( 67)
00:25:21.120 14996.010 - 15110.484: 70.0484% ( 82)
00:25:21.120 15110.484 - 15224.957: 70.7526% ( 64)
00:25:21.120 15224.957 - 15339.431: 71.4349% ( 62)
00:25:21.120 15339.431 - 15453.904: 72.5572% ( 102)
00:25:21.120 15453.904 - 15568.377: 73.3935% ( 76)
00:25:21.120 15568.377 - 15682.851: 74.4278% ( 94)
00:25:21.120 15682.851 - 15797.324: 75.6272% ( 109)
00:25:21.120 15797.324 - 15911.797: 77.1237% ( 136)
00:25:21.120 15911.797 - 16026.271: 78.5321% ( 128)
00:25:21.120 16026.271 - 16140.744: 79.6875% ( 105)
00:25:21.120 16140.744 - 16255.217: 80.7328% ( 95)
00:25:21.120 16255.217 - 16369.691: 82.0973% ( 124)
00:25:21.120 16369.691 - 16484.164: 82.9005% ( 73)
00:25:21.120 16484.164 - 16598.638: 83.8688% ( 88)
00:25:21.120 16598.638 - 16713.111: 84.8482% ( 89)
00:25:21.120 16713.111 - 16827.584: 85.6844% ( 76)
00:25:21.120 16827.584 - 16942.058: 86.3556% ( 61)
00:25:21.120 16942.058 - 17056.531: 87.0048% ( 59)
00:25:21.120 17056.531 - 17171.004: 87.7531% ( 68)
00:25:21.120 17171.004 - 17285.478: 88.5123% ( 69)
00:25:21.120 17285.478 - 17399.951: 89.1615% ( 59)
00:25:21.120 17399.951 - 17514.424: 89.8548% ( 63)
00:25:21.120 17514.424 - 17628.898: 90.4049% ( 50)
00:25:21.120 17628.898 - 17743.371: 91.3292% ( 84)
00:25:21.120 17743.371 - 17857.845: 92.3856% ( 96)
00:25:21.120 17857.845 - 17972.318: 93.1888% ( 73)
00:25:21.120 17972.318 - 18086.791: 93.9151% ( 66)
00:25:21.120 18086.791 - 18201.265: 94.4212% ( 46)
00:25:21.120 18201.265 - 18315.738: 94.8173% ( 36)
00:25:21.120 18315.738 - 18430.211: 95.1695% ( 32)
00:25:21.120 18430.211 - 18544.685: 95.4665% ( 27)
00:25:21.120 18544.685 - 18659.158: 95.8187% ( 32)
00:25:21.120 18659.158 - 18773.631: 96.0497% ( 21)
00:25:21.120 18773.631 - 18888.105: 96.2148% ( 15)
00:25:21.120 18888.105 - 19002.578: 96.3468% ( 12)
00:25:21.120 19002.578 - 19117.052: 96.4459% ( 9)
00:25:21.120 19117.052 - 19231.525: 96.5449% ( 9)
00:25:21.120 19231.525 - 19345.998: 96.5999% ( 5)
00:25:21.120 19345.998 - 19460.472: 96.6549% ( 5)
00:25:21.120 19460.472 - 19574.945: 96.8970% ( 22)
00:25:21.120 19574.945 - 19689.418: 97.1391% ( 22)
00:25:21.120 19689.418 - 19803.892: 97.4912% ( 32)
00:25:21.120 19803.892 - 19918.365: 97.9093% ( 38)
00:25:21.120 19918.365 - 20032.838: 98.1074% ( 18)
00:25:21.120 20032.838 - 20147.312: 98.3605% ( 23)
00:25:21.120 20147.312 - 20261.785: 98.4705% ( 10)
00:25:21.120 20261.785 - 20376.259: 98.5585% ( 8)
00:25:21.120 20376.259 - 20490.732: 98.5805% ( 2)
00:25:21.120 20490.732 - 20605.205: 98.5915% ( 1)
00:25:21.120 38234.103 - 38463.050: 98.6356% ( 4)
00:25:21.120 38463.050 - 38691.997: 98.7016% ( 6)
00:25:21.120 38691.997 - 38920.943: 98.7566% ( 5)
00:25:21.120 38920.943 - 39149.890: 98.8116% ( 5)
00:25:21.120 39149.890 - 39378.837: 98.8776% ( 6)
00:25:21.120 39378.837 - 39607.783: 98.9217% ( 4)
00:25:21.120 39607.783 - 39836.730: 98.9767% ( 5)
00:25:21.120 39836.730 - 40065.677: 99.0317% ( 5)
00:25:21.120 40065.677 - 40294.624: 99.0867% ( 5)
00:25:21.120 40294.624 - 40523.570: 99.1417% ( 5)
00:25:21.120 40523.570 - 40752.517: 99.1857% ( 4)
00:25:21.120 40752.517 - 40981.464: 99.2518% ( 6)
00:25:21.120 40981.464 - 41210.410: 99.2958% ( 4)
00:25:21.120 48765.652 - 48994.599: 99.3288% ( 3)
00:25:21.120 48994.599 - 49223.546: 99.3838% ( 5)
00:25:21.120 49223.546 - 49452.493: 99.4388% ( 5)
00:25:21.120 49452.493 - 49681.439: 99.4828% ( 4)
00:25:21.120 49681.439 - 49910.386: 99.5489% ( 6)
00:25:21.120 49910.386 - 50139.333: 99.6039% ( 5)
00:25:21.120 50139.333 - 50368.279: 99.6589% ( 5)
00:25:21.120 50368.279 - 50597.226: 99.7139% ( 5)
00:25:21.120 50597.226 - 50826.173: 99.7689% ( 5)
00:25:21.120 50826.173 - 51055.120: 99.8239% ( 5)
00:25:21.120 51055.120 - 51284.066: 99.8790% ( 5)
00:25:21.120 51284.066 - 51513.013: 99.9340% ( 5)
00:25:21.120 51513.013 - 51741.960: 99.9890% ( 5)
00:25:21.120 51741.960 - 51970.907: 100.0000% ( 1)
00:25:21.121
00:25:21.121 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:25:21.121 ==============================================================================
00:25:21.121 Range in us Cumulative IO count
00:25:21.121 9100.632 - 9157.869: 0.0110% ( 1)
00:25:21.121 9215.106 - 9272.342: 0.0330% ( 2)
00:25:21.121 9272.342 - 9329.579: 0.1430% ( 10)
00:25:21.121 9329.579 - 9386.816: 0.2641% ( 11)
00:25:21.121 9386.816 - 9444.052: 0.3961% ( 12)
00:25:21.121 9444.052 - 9501.289: 0.6712% ( 25)
00:25:21.121 9501.289 - 9558.526: 1.0013% ( 30)
00:25:21.121 9558.526 - 9615.762: 1.2434% ( 22)
00:25:21.121 9615.762 - 9672.999: 1.4635% ( 20)
00:25:21.121 9672.999 - 9730.236: 1.7386% ( 25)
00:25:21.121 9730.236 - 9787.472: 1.9146% ( 16)
00:25:21.121 9787.472 - 9844.709: 2.1017% ( 17)
00:25:21.121 9844.709 - 9901.946: 2.3438% ( 22)
00:25:21.121 9901.946 - 9959.183: 2.5088% ( 15)
00:25:21.121 9959.183 - 10016.419: 2.7179% ( 19)
00:25:21.121 10016.419 - 10073.656: 3.0260% ( 28)
00:25:21.121 10073.656 - 10130.893: 3.2790% ( 23)
00:25:21.121 10130.893 - 10188.129: 3.6752% ( 36)
00:25:21.121 10188.129 - 10245.366: 3.9833% ( 28)
00:25:21.121 10245.366 - 10302.603: 4.3134% ( 30)
00:25:21.121 10302.603 - 10359.839: 4.5114% ( 18)
00:25:21.121 10359.839 - 10417.076: 4.6765% ( 15)
00:25:21.121 10417.076 - 10474.313: 4.8636% ( 17)
00:25:21.121 10474.313 - 10531.549: 5.1276% ( 24)
00:25:21.121 10531.549 - 10588.786: 5.4577% ( 30)
00:25:21.121 10588.786 - 10646.023: 5.8539% ( 36)
00:25:21.121 10646.023 - 10703.259: 6.6021% ( 68)
00:25:21.121 10703.259 - 10760.496: 7.2623% ( 60)
00:25:21.121 10760.496 - 10817.733: 8.0436% ( 71)
00:25:21.121 10817.733 - 10874.969: 8.8688% ( 75)
00:25:21.121 10874.969 - 10932.206: 10.1893% ( 120)
00:25:21.121 10932.206 - 10989.443: 11.0255% ( 76)
00:25:21.121 10989.443 - 11046.679: 12.1369% ( 101)
00:25:21.121 11046.679 - 11103.916: 13.2592% ( 102)
00:25:21.121 11103.916 - 11161.153: 14.3266% ( 97)
00:25:21.121 11161.153 - 11218.390: 15.2949% ( 88)
00:25:21.121 11218.390 - 11275.626: 16.2082% ( 83)
00:25:21.121 11275.626 - 11332.863: 17.3526% ( 104)
00:25:21.121 11332.863 - 11390.100: 18.2658% ( 83)
00:25:21.121 11390.100 - 11447.336: 19.3002% ( 94)
00:25:21.121 11447.336 - 11504.573: 20.2575% ( 87)
00:25:21.121 11504.573 - 11561.810: 21.2038% ( 86)
00:25:21.121 11561.810 - 11619.046: 21.7540% ( 50)
00:25:21.121 11619.046 - 11676.283: 22.3592% ( 55)
00:25:21.121 11676.283 - 11733.520: 23.1074% ( 68)
00:25:21.121 11733.520 - 11790.756: 23.9657% ( 78)
00:25:21.121 11790.756 - 11847.993: 24.6919% ( 66)
00:25:21.121 11847.993 - 11905.230: 25.4511% ( 69)
00:25:21.121 11905.230 - 11962.466: 26.3534% ( 82)
00:25:21.121 11962.466 - 12019.703: 27.3768% ( 93)
00:25:21.121 12019.703 - 12076.940: 28.7522% ( 125)
00:25:21.121 12076.940 - 12134.176: 30.3037% ( 141)
00:25:21.121 12134.176 - 12191.413: 31.5801% ( 116)
00:25:21.121 12191.413 - 12248.650: 32.6805% ( 100)
00:25:21.121 12248.650 - 12305.886: 33.8578% ( 107)
00:25:21.121 12305.886 - 12363.123: 34.6941% ( 76)
00:25:21.121 12363.123 - 12420.360: 35.4423% ( 68)
00:25:21.121 12420.360 - 12477.597: 36.2126% ( 70)
00:25:21.121 12477.597 - 12534.833: 36.9388% ( 66)
00:25:21.121 12534.833 - 12592.070: 37.9511% ( 92)
00:25:21.121 12592.070 - 12649.307: 38.8094% ( 78)
00:25:21.121 12649.307 - 12706.543: 39.5687% ( 69)
00:25:21.121 12706.543 - 12763.780: 40.4710% ( 82)
00:25:21.121 12763.780 - 12821.017: 41.3402% ( 79)
00:25:21.121 12821.017 - 12878.253: 42.1655% ( 75)
00:25:21.121 12878.253 - 12935.490: 43.1668% ( 91)
00:25:21.121 12935.490 - 12992.727: 44.0911% ( 84)
00:25:21.121 12992.727 - 13049.963: 45.2465% ( 105)
00:25:21.122 13049.963 - 13107.200: 46.3248% ( 98)
00:25:21.122 13107.200 - 13164.437: 47.4912% ( 106)
00:25:21.122 13164.437 - 13221.673: 48.6136% ( 102)
00:25:21.122 13221.673 - 13278.910: 49.6039% ( 90)
00:25:21.122 13278.910 - 13336.147: 50.4732% ( 79)
00:25:21.122 13336.147 - 13393.383: 51.2984% ( 75)
00:25:21.122 13393.383 - 13450.620: 52.0467% ( 68)
00:25:21.122 13450.620 - 13507.857: 52.9159% ( 79)
00:25:21.122 13507.857 - 13565.093: 53.8402% ( 84)
00:25:21.122 13565.093 - 13622.330: 54.6325% ( 72)
00:25:21.122 13622.330 - 13679.567: 55.6668% ( 94)
00:25:21.122 13679.567 - 13736.803: 56.4481% ( 71)
00:25:21.122 13736.803 - 13794.040: 57.2513% ( 73)
00:25:21.122 13794.040 - 13851.277: 58.0106% ( 69)
00:25:21.122 13851.277 - 13908.514: 58.8468% ( 76)
00:25:21.122 13908.514 - 13965.750: 59.4850% ( 58)
00:25:21.122 13965.750 - 14022.987: 60.2333% ( 68)
00:25:21.122 14022.987 - 14080.224: 60.9595% ( 66)
00:25:21.122 14080.224 - 14137.460: 61.7628% ( 73)
00:25:21.122 14137.460 - 14194.697:
62.4780% ( 65) 00:25:21.121 14194.697 - 14251.934: 63.3913% ( 83) 00:25:21.121 14251.934 - 14309.170: 64.0845% ( 63) 00:25:21.121 14309.170 - 14366.407: 64.6897% ( 55) 00:25:21.121 14366.407 - 14423.644: 65.2619% ( 52) 00:25:21.121 14423.644 - 14480.880: 66.0321% ( 70) 00:25:21.121 14480.880 - 14538.117: 66.8244% ( 72) 00:25:21.121 14538.117 - 14595.354: 67.3526% ( 48) 00:25:21.121 14595.354 - 14652.590: 67.8257% ( 43) 00:25:21.121 14652.590 - 14767.064: 68.5849% ( 69) 00:25:21.121 14767.064 - 14881.537: 69.4542% ( 79) 00:25:21.121 14881.537 - 14996.010: 70.0594% ( 55) 00:25:21.121 14996.010 - 15110.484: 70.5986% ( 49) 00:25:21.121 15110.484 - 15224.957: 71.2918% ( 63) 00:25:21.121 15224.957 - 15339.431: 72.1611% ( 79) 00:25:21.121 15339.431 - 15453.904: 73.3165% ( 105) 00:25:21.121 15453.904 - 15568.377: 74.6589% ( 122) 00:25:21.121 15568.377 - 15682.851: 76.1444% ( 135) 00:25:21.121 15682.851 - 15797.324: 77.2887% ( 104) 00:25:21.121 15797.324 - 15911.797: 78.3671% ( 98) 00:25:21.121 15911.797 - 16026.271: 79.2033% ( 76) 00:25:21.121 16026.271 - 16140.744: 80.2047% ( 91) 00:25:21.121 16140.744 - 16255.217: 81.6791% ( 134) 00:25:21.121 16255.217 - 16369.691: 83.0216% ( 122) 00:25:21.121 16369.691 - 16484.164: 83.8908% ( 79) 00:25:21.121 16484.164 - 16598.638: 84.7381% ( 77) 00:25:21.121 16598.638 - 16713.111: 85.7174% ( 89) 00:25:21.121 16713.111 - 16827.584: 86.3886% ( 61) 00:25:21.121 16827.584 - 16942.058: 86.8288% ( 40) 00:25:21.121 16942.058 - 17056.531: 87.1809% ( 32) 00:25:21.121 17056.531 - 17171.004: 87.6651% ( 44) 00:25:21.121 17171.004 - 17285.478: 88.5123% ( 77) 00:25:21.121 17285.478 - 17399.951: 89.5467% ( 94) 00:25:21.121 17399.951 - 17514.424: 90.3389% ( 72) 00:25:21.122 17514.424 - 17628.898: 91.0871% ( 68) 00:25:21.122 17628.898 - 17743.371: 92.0114% ( 84) 00:25:21.122 17743.371 - 17857.845: 92.7267% ( 65) 00:25:21.122 17857.845 - 17972.318: 93.3759% ( 59) 00:25:21.122 17972.318 - 18086.791: 94.1021% ( 66) 00:25:21.122 18086.791 - 18201.265: 94.6413% ( 49) 00:25:21.122 18201.265 - 18315.738: 95.0924% ( 41) 00:25:21.122 18315.738 - 18430.211: 95.3785% ( 26) 00:25:21.122 18430.211 - 18544.685: 95.5986% ( 20) 00:25:21.122 18544.685 - 18659.158: 95.8187% ( 20) 00:25:21.122 18659.158 - 18773.631: 96.0938% ( 25) 00:25:21.122 18773.631 - 18888.105: 96.2478% ( 14) 00:25:21.122 18888.105 - 19002.578: 96.3468% ( 9) 00:25:21.122 19002.578 - 19117.052: 96.4789% ( 12) 00:25:21.122 19117.052 - 19231.525: 96.5779% ( 9) 00:25:21.122 19231.525 - 19345.998: 96.6659% ( 8) 00:25:21.122 19345.998 - 19460.472: 96.7540% ( 8) 00:25:21.122 19460.472 - 19574.945: 96.8530% ( 9) 00:25:21.122 19574.945 - 19689.418: 97.0951% ( 22) 00:25:21.122 19689.418 - 19803.892: 97.2711% ( 16) 00:25:21.122 19803.892 - 19918.365: 97.4472% ( 16) 00:25:21.122 19918.365 - 20032.838: 97.6452% ( 18) 00:25:21.122 20032.838 - 20147.312: 97.7443% ( 9) 00:25:21.122 20147.312 - 20261.785: 97.7883% ( 4) 00:25:21.122 20261.785 - 20376.259: 97.8433% ( 5) 00:25:21.122 20376.259 - 20490.732: 97.8763% ( 3) 00:25:21.122 20490.732 - 20605.205: 97.8873% ( 1) 00:25:21.122 20834.152 - 20948.625: 97.8983% ( 1) 00:25:21.122 21292.045 - 21406.519: 98.0744% ( 16) 00:25:21.122 21406.519 - 21520.992: 98.2064% ( 12) 00:25:21.122 21520.992 - 21635.466: 98.3605% ( 14) 00:25:21.122 21635.466 - 21749.939: 98.4265% ( 6) 00:25:21.122 21749.939 - 21864.412: 98.4595% ( 3) 00:25:21.122 21864.412 - 21978.886: 98.4925% ( 3) 00:25:21.122 21978.886 - 22093.359: 98.5255% ( 3) 00:25:21.122 22093.359 - 22207.832: 98.5475% ( 2) 00:25:21.122 22207.832 - 
22322.306: 98.5805% ( 3) 00:25:21.122 22322.306 - 22436.779: 98.5915% ( 1) 00:25:21.122 37089.369 - 37318.316: 98.6576% ( 6) 00:25:21.122 37318.316 - 37547.263: 98.7016% ( 4) 00:25:21.122 37547.263 - 37776.210: 98.7676% ( 6) 00:25:21.122 37776.210 - 38005.156: 98.8336% ( 6) 00:25:21.122 38005.156 - 38234.103: 98.8886% ( 5) 00:25:21.122 38234.103 - 38463.050: 98.9547% ( 6) 00:25:21.122 38463.050 - 38691.997: 99.0207% ( 6) 00:25:21.122 38691.997 - 38920.943: 99.0757% ( 5) 00:25:21.122 38920.943 - 39149.890: 99.1417% ( 6) 00:25:21.122 39149.890 - 39378.837: 99.2077% ( 6) 00:25:21.122 39378.837 - 39607.783: 99.2738% ( 6) 00:25:21.122 39607.783 - 39836.730: 99.2958% ( 2) 00:25:21.122 47163.025 - 47391.972: 99.3508% ( 5) 00:25:21.122 47391.972 - 47620.919: 99.4058% ( 5) 00:25:21.122 47620.919 - 47849.866: 99.4608% ( 5) 00:25:21.122 47849.866 - 48078.812: 99.5048% ( 4) 00:25:21.122 48078.812 - 48307.759: 99.5599% ( 5) 00:25:21.122 48307.759 - 48536.706: 99.6149% ( 5) 00:25:21.122 48536.706 - 48765.652: 99.6699% ( 5) 00:25:21.122 48765.652 - 48994.599: 99.7249% ( 5) 00:25:21.122 48994.599 - 49223.546: 99.7689% ( 4) 00:25:21.122 49223.546 - 49452.493: 99.8239% ( 5) 00:25:21.122 49452.493 - 49681.439: 99.8790% ( 5) 00:25:21.122 49681.439 - 49910.386: 99.9340% ( 5) 00:25:21.122 49910.386 - 50139.333: 100.0000% ( 6) 00:25:21.122 00:25:21.122 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:25:21.122 ============================================================================== 00:25:21.122 Range in us Cumulative IO count 00:25:21.122 8986.159 - 9043.396: 0.0110% ( 1) 00:25:21.122 9043.396 - 9100.632: 0.0220% ( 1) 00:25:21.122 9100.632 - 9157.869: 0.0330% ( 1) 00:25:21.122 9157.869 - 9215.106: 0.0440% ( 1) 00:25:21.122 9215.106 - 9272.342: 0.0990% ( 5) 00:25:21.122 9272.342 - 9329.579: 0.1871% ( 8) 00:25:21.122 9329.579 - 9386.816: 0.2861% ( 9) 00:25:21.122 9386.816 - 9444.052: 0.4181% ( 12) 00:25:21.122 9444.052 - 9501.289: 0.7592% ( 31) 00:25:21.122 9501.289 - 9558.526: 0.9793% ( 20) 00:25:21.122 9558.526 - 9615.762: 1.2874% ( 28) 00:25:21.122 9615.762 - 9672.999: 1.5405% ( 23) 00:25:21.122 9672.999 - 9730.236: 1.7496% ( 19) 00:25:21.122 9730.236 - 9787.472: 1.8706% ( 11) 00:25:21.122 9787.472 - 9844.709: 2.0026% ( 12) 00:25:21.122 9844.709 - 9901.946: 2.1787% ( 16) 00:25:21.122 9901.946 - 9959.183: 2.3107% ( 12) 00:25:21.122 9959.183 - 10016.419: 2.4098% ( 9) 00:25:21.122 10016.419 - 10073.656: 2.6629% ( 23) 00:25:21.122 10073.656 - 10130.893: 2.8059% ( 13) 00:25:21.122 10130.893 - 10188.129: 3.0370% ( 21) 00:25:21.122 10188.129 - 10245.366: 3.4001% ( 33) 00:25:21.122 10245.366 - 10302.603: 3.6972% ( 27) 00:25:21.122 10302.603 - 10359.839: 4.1263% ( 39) 00:25:21.122 10359.839 - 10417.076: 4.4344% ( 28) 00:25:21.122 10417.076 - 10474.313: 4.8195% ( 35) 00:25:21.122 10474.313 - 10531.549: 5.2487% ( 39) 00:25:21.122 10531.549 - 10588.786: 5.8319% ( 53) 00:25:21.122 10588.786 - 10646.023: 6.4040% ( 52) 00:25:21.122 10646.023 - 10703.259: 7.0973% ( 63) 00:25:21.122 10703.259 - 10760.496: 7.8675% ( 70) 00:25:21.122 10760.496 - 10817.733: 8.7698% ( 82) 00:25:21.122 10817.733 - 10874.969: 9.6941% ( 84) 00:25:21.122 10874.969 - 10932.206: 10.8495% ( 105) 00:25:21.122 10932.206 - 10989.443: 11.8728% ( 93) 00:25:21.122 10989.443 - 11046.679: 12.7971% ( 84) 00:25:21.122 11046.679 - 11103.916: 13.7764% ( 89) 00:25:21.122 11103.916 - 11161.153: 14.5357% ( 69) 00:25:21.122 11161.153 - 11218.390: 15.2949% ( 69) 00:25:21.122 11218.390 - 11275.626: 16.1202% ( 75) 00:25:21.122 11275.626 - 11332.863: 
17.0665% ( 86) 00:25:21.122 11332.863 - 11390.100: 18.0898% ( 93) 00:25:21.122 11390.100 - 11447.336: 19.1131% ( 93) 00:25:21.122 11447.336 - 11504.573: 20.3345% ( 111) 00:25:21.122 11504.573 - 11561.810: 21.2698% ( 85) 00:25:21.122 11561.810 - 11619.046: 22.2601% ( 90) 00:25:21.122 11619.046 - 11676.283: 23.0964% ( 76) 00:25:21.122 11676.283 - 11733.520: 24.1967% ( 100) 00:25:21.122 11733.520 - 11790.756: 25.2421% ( 95) 00:25:21.122 11790.756 - 11847.993: 26.1774% ( 85) 00:25:21.122 11847.993 - 11905.230: 27.2557% ( 98) 00:25:21.122 11905.230 - 11962.466: 28.2790% ( 93) 00:25:21.122 11962.466 - 12019.703: 29.2474% ( 88) 00:25:21.122 12019.703 - 12076.940: 30.0726% ( 75) 00:25:21.122 12076.940 - 12134.176: 30.8649% ( 72) 00:25:21.122 12134.176 - 12191.413: 31.7672% ( 82) 00:25:21.122 12191.413 - 12248.650: 32.7135% ( 86) 00:25:21.122 12248.650 - 12305.886: 33.6268% ( 83) 00:25:21.122 12305.886 - 12363.123: 34.4190% ( 72) 00:25:21.122 12363.123 - 12420.360: 35.2333% ( 74) 00:25:21.122 12420.360 - 12477.597: 36.3446% ( 101) 00:25:21.122 12477.597 - 12534.833: 37.3019% ( 87) 00:25:21.122 12534.833 - 12592.070: 38.1602% ( 78) 00:25:21.122 12592.070 - 12649.307: 39.2826% ( 102) 00:25:21.122 12649.307 - 12706.543: 40.3719% ( 99) 00:25:21.122 12706.543 - 12763.780: 41.5603% ( 108) 00:25:21.122 12763.780 - 12821.017: 42.7597% ( 109) 00:25:21.122 12821.017 - 12878.253: 43.5849% ( 75) 00:25:21.122 12878.253 - 12935.490: 44.6523% ( 97) 00:25:21.122 12935.490 - 12992.727: 45.4115% ( 69) 00:25:21.122 12992.727 - 13049.963: 46.0827% ( 61) 00:25:21.122 13049.963 - 13107.200: 46.9630% ( 80) 00:25:21.122 13107.200 - 13164.437: 47.9313% ( 88) 00:25:21.122 13164.437 - 13221.673: 48.7566% ( 75) 00:25:21.122 13221.673 - 13278.910: 49.6369% ( 80) 00:25:21.122 13278.910 - 13336.147: 50.7262% ( 99) 00:25:21.122 13336.147 - 13393.383: 52.0357% ( 119) 00:25:21.122 13393.383 - 13450.620: 53.4991% ( 133) 00:25:21.122 13450.620 - 13507.857: 54.6105% ( 101) 00:25:21.122 13507.857 - 13565.093: 55.5238% ( 83) 00:25:21.122 13565.093 - 13622.330: 56.2610% ( 67) 00:25:21.122 13622.330 - 13679.567: 57.0643% ( 73) 00:25:21.123 13679.567 - 13736.803: 57.8565% ( 72) 00:25:21.123 13736.803 - 13794.040: 58.5717% ( 65) 00:25:21.123 13794.040 - 13851.277: 59.2210% ( 59) 00:25:21.123 13851.277 - 13908.514: 59.8482% ( 57) 00:25:21.123 13908.514 - 13965.750: 60.3213% ( 43) 00:25:21.123 13965.750 - 14022.987: 60.7504% ( 39) 00:25:21.123 14022.987 - 14080.224: 61.2126% ( 42) 00:25:21.123 14080.224 - 14137.460: 61.6307% ( 38) 00:25:21.123 14137.460 - 14194.697: 62.1039% ( 43) 00:25:21.123 14194.697 - 14251.934: 62.5990% ( 45) 00:25:21.123 14251.934 - 14309.170: 63.1382% ( 49) 00:25:21.123 14309.170 - 14366.407: 63.5233% ( 35) 00:25:21.123 14366.407 - 14423.644: 63.9415% ( 38) 00:25:21.123 14423.644 - 14480.880: 64.4696% ( 48) 00:25:21.123 14480.880 - 14538.117: 64.9758% ( 46) 00:25:21.123 14538.117 - 14595.354: 65.4820% ( 46) 00:25:21.123 14595.354 - 14652.590: 66.0431% ( 51) 00:25:21.123 14652.590 - 14767.064: 67.2755% ( 112) 00:25:21.123 14767.064 - 14881.537: 68.3869% ( 101) 00:25:21.123 14881.537 - 14996.010: 69.5973% ( 110) 00:25:21.123 14996.010 - 15110.484: 71.0387% ( 131) 00:25:21.123 15110.484 - 15224.957: 71.9960% ( 87) 00:25:21.123 15224.957 - 15339.431: 73.1074% ( 101) 00:25:21.123 15339.431 - 15453.904: 74.1527% ( 95) 00:25:21.123 15453.904 - 15568.377: 75.3301% ( 107) 00:25:21.123 15568.377 - 15682.851: 76.6615% ( 121) 00:25:21.123 15682.851 - 15797.324: 77.8169% ( 105) 00:25:21.123 15797.324 - 15911.797: 78.6642% ( 77) 
00:25:21.123 15911.797 - 16026.271: 79.6325% ( 88) 00:25:21.123 16026.271 - 16140.744: 80.6008% ( 88) 00:25:21.123 16140.744 - 16255.217: 81.3270% ( 66) 00:25:21.123 16255.217 - 16369.691: 82.2183% ( 81) 00:25:21.123 16369.691 - 16484.164: 83.2086% ( 90) 00:25:21.123 16484.164 - 16598.638: 84.1219% ( 83) 00:25:21.123 16598.638 - 16713.111: 84.8261% ( 64) 00:25:21.123 16713.111 - 16827.584: 85.4533% ( 57) 00:25:21.123 16827.584 - 16942.058: 86.2016% ( 68) 00:25:21.123 16942.058 - 17056.531: 87.0379% ( 76) 00:25:21.123 17056.531 - 17171.004: 87.9401% ( 82) 00:25:21.123 17171.004 - 17285.478: 88.4573% ( 47) 00:25:21.123 17285.478 - 17399.951: 89.0735% ( 56) 00:25:21.123 17399.951 - 17514.424: 89.6347% ( 51) 00:25:21.123 17514.424 - 17628.898: 90.2179% ( 53) 00:25:21.123 17628.898 - 17743.371: 91.1752% ( 87) 00:25:21.123 17743.371 - 17857.845: 92.1325% ( 87) 00:25:21.123 17857.845 - 17972.318: 92.7267% ( 54) 00:25:21.123 17972.318 - 18086.791: 93.2879% ( 51) 00:25:21.123 18086.791 - 18201.265: 93.8270% ( 49) 00:25:21.123 18201.265 - 18315.738: 94.5202% ( 63) 00:25:21.123 18315.738 - 18430.211: 94.8834% ( 33) 00:25:21.123 18430.211 - 18544.685: 95.1144% ( 21) 00:25:21.123 18544.685 - 18659.158: 95.3345% ( 20) 00:25:21.123 18659.158 - 18773.631: 95.4996% ( 15) 00:25:21.123 18773.631 - 18888.105: 95.6206% ( 11) 00:25:21.123 18888.105 - 19002.578: 95.7306% ( 10) 00:25:21.123 19002.578 - 19117.052: 95.8847% ( 14) 00:25:21.123 19117.052 - 19231.525: 96.2038% ( 29) 00:25:21.123 19231.525 - 19345.998: 96.4679% ( 24) 00:25:21.123 19345.998 - 19460.472: 96.5669% ( 9) 00:25:21.123 19460.472 - 19574.945: 96.6659% ( 9) 00:25:21.123 19574.945 - 19689.418: 96.7430% ( 7) 00:25:21.123 19689.418 - 19803.892: 96.8640% ( 11) 00:25:21.123 19803.892 - 19918.365: 97.0841% ( 20) 00:25:21.123 19918.365 - 20032.838: 97.1391% ( 5) 00:25:21.123 20032.838 - 20147.312: 97.1831% ( 4) 00:25:21.123 20147.312 - 20261.785: 97.2491% ( 6) 00:25:21.123 20261.785 - 20376.259: 97.3261% ( 7) 00:25:21.123 20376.259 - 20490.732: 97.4362% ( 10) 00:25:21.123 20490.732 - 20605.205: 97.6122% ( 16) 00:25:21.123 20605.205 - 20719.679: 97.7113% ( 9) 00:25:21.123 20719.679 - 20834.152: 97.7663% ( 5) 00:25:21.123 20834.152 - 20948.625: 97.9533% ( 17) 00:25:21.123 20948.625 - 21063.099: 98.2284% ( 25) 00:25:21.123 21063.099 - 21177.572: 98.4375% ( 19) 00:25:21.123 21177.572 - 21292.045: 98.4705% ( 3) 00:25:21.123 21292.045 - 21406.519: 98.4925% ( 2) 00:25:21.123 21406.519 - 21520.992: 98.5255% ( 3) 00:25:21.123 21520.992 - 21635.466: 98.5585% ( 3) 00:25:21.123 21635.466 - 21749.939: 98.5915% ( 3) 00:25:21.123 34342.009 - 34570.955: 98.6246% ( 3) 00:25:21.123 34570.955 - 34799.902: 98.6686% ( 4) 00:25:21.123 34799.902 - 35028.849: 98.7236% ( 5) 00:25:21.123 35028.849 - 35257.796: 98.7786% ( 5) 00:25:21.123 35257.796 - 35486.742: 98.8446% ( 6) 00:25:21.123 35486.742 - 35715.689: 98.8886% ( 4) 00:25:21.123 35715.689 - 35944.636: 98.9437% ( 5) 00:25:21.123 35944.636 - 36173.583: 98.9987% ( 5) 00:25:21.123 36173.583 - 36402.529: 99.0537% ( 5) 00:25:21.123 36402.529 - 36631.476: 99.1087% ( 5) 00:25:21.123 36631.476 - 36860.423: 99.1637% ( 5) 00:25:21.123 36860.423 - 37089.369: 99.2188% ( 5) 00:25:21.123 37089.369 - 37318.316: 99.2738% ( 5) 00:25:21.123 37318.316 - 37547.263: 99.2958% ( 2) 00:25:21.123 44644.611 - 44873.558: 99.3068% ( 1) 00:25:21.123 44873.558 - 45102.505: 99.3508% ( 4) 00:25:21.123 45102.505 - 45331.452: 99.4058% ( 5) 00:25:21.123 45331.452 - 45560.398: 99.4608% ( 5) 00:25:21.123 45560.398 - 45789.345: 99.5158% ( 5) 00:25:21.123 
45789.345 - 46018.292: 99.5709% ( 5) 00:25:21.123 46018.292 - 46247.238: 99.6259% ( 5) 00:25:21.123 46247.238 - 46476.185: 99.6699% ( 4) 00:25:21.123 46476.185 - 46705.132: 99.7249% ( 5) 00:25:21.123 46705.132 - 46934.079: 99.7799% ( 5) 00:25:21.123 46934.079 - 47163.025: 99.8349% ( 5) 00:25:21.123 47163.025 - 47391.972: 99.8790% ( 4) 00:25:21.123 47391.972 - 47620.919: 99.9340% ( 5) 00:25:21.123 47620.919 - 47849.866: 99.9780% ( 4) 00:25:21.123 47849.866 - 48078.812: 100.0000% ( 2) 00:25:21.123 00:25:21.123 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:25:21.123 ============================================================================== 00:25:21.123 Range in us Cumulative IO count 00:25:21.123 9329.579 - 9386.816: 0.0440% ( 4) 00:25:21.123 9386.816 - 9444.052: 0.1430% ( 9) 00:25:21.123 9444.052 - 9501.289: 0.2531% ( 10) 00:25:21.123 9501.289 - 9558.526: 0.3851% ( 12) 00:25:21.123 9558.526 - 9615.762: 0.6382% ( 23) 00:25:21.123 9615.762 - 9672.999: 0.7812% ( 13) 00:25:21.123 9672.999 - 9730.236: 1.0233% ( 22) 00:25:21.123 9730.236 - 9787.472: 1.1664% ( 13) 00:25:21.123 9787.472 - 9844.709: 1.2874% ( 11) 00:25:21.123 9844.709 - 9901.946: 1.3754% ( 8) 00:25:21.123 9901.946 - 9959.183: 1.5515% ( 16) 00:25:21.123 9959.183 - 10016.419: 1.6945% ( 13) 00:25:21.123 10016.419 - 10073.656: 1.9586% ( 24) 00:25:21.123 10073.656 - 10130.893: 2.2997% ( 31) 00:25:21.123 10130.893 - 10188.129: 2.7399% ( 40) 00:25:21.123 10188.129 - 10245.366: 3.2240% ( 44) 00:25:21.123 10245.366 - 10302.603: 3.7302% ( 46) 00:25:21.123 10302.603 - 10359.839: 4.1483% ( 38) 00:25:21.123 10359.839 - 10417.076: 4.6325% ( 44) 00:25:21.123 10417.076 - 10474.313: 5.2047% ( 52) 00:25:21.123 10474.313 - 10531.549: 5.7658% ( 51) 00:25:21.123 10531.549 - 10588.786: 6.7892% ( 93) 00:25:21.123 10588.786 - 10646.023: 7.7025% ( 83) 00:25:21.123 10646.023 - 10703.259: 8.7698% ( 97) 00:25:21.123 10703.259 - 10760.496: 9.5070% ( 67) 00:25:21.123 10760.496 - 10817.733: 10.0682% ( 51) 00:25:21.123 10817.733 - 10874.969: 10.6844% ( 56) 00:25:21.123 10874.969 - 10932.206: 11.2786% ( 54) 00:25:21.123 10932.206 - 10989.443: 11.8068% ( 48) 00:25:21.123 10989.443 - 11046.679: 12.4010% ( 54) 00:25:21.124 11046.679 - 11103.916: 12.8961% ( 45) 00:25:21.124 11103.916 - 11161.153: 13.6224% ( 66) 00:25:21.124 11161.153 - 11218.390: 14.3156% ( 63) 00:25:21.124 11218.390 - 11275.626: 15.0528% ( 67) 00:25:21.124 11275.626 - 11332.863: 15.9991% ( 86) 00:25:21.124 11332.863 - 11390.100: 17.0224% ( 93) 00:25:21.124 11390.100 - 11447.336: 17.9027% ( 80) 00:25:21.124 11447.336 - 11504.573: 18.7170% ( 74) 00:25:21.124 11504.573 - 11561.810: 19.5202% ( 73) 00:25:21.124 11561.810 - 11619.046: 20.3565% ( 76) 00:25:21.124 11619.046 - 11676.283: 21.3028% ( 86) 00:25:21.124 11676.283 - 11733.520: 22.4362% ( 103) 00:25:21.124 11733.520 - 11790.756: 23.7896% ( 123) 00:25:21.124 11790.756 - 11847.993: 25.2751% ( 135) 00:25:21.124 11847.993 - 11905.230: 26.4415% ( 106) 00:25:21.124 11905.230 - 11962.466: 27.5748% ( 103) 00:25:21.124 11962.466 - 12019.703: 28.5431% ( 88) 00:25:21.124 12019.703 - 12076.940: 29.7095% ( 106) 00:25:21.124 12076.940 - 12134.176: 30.7108% ( 91) 00:25:21.124 12134.176 - 12191.413: 31.9542% ( 113) 00:25:21.124 12191.413 - 12248.650: 32.8345% ( 80) 00:25:21.124 12248.650 - 12305.886: 33.9459% ( 101) 00:25:21.124 12305.886 - 12363.123: 35.1893% ( 113) 00:25:21.124 12363.123 - 12420.360: 36.4767% ( 117) 00:25:21.124 12420.360 - 12477.597: 37.5330% ( 96) 00:25:21.124 12477.597 - 12534.833: 38.4903% ( 87) 00:25:21.124 12534.833 - 
12592.070: 39.2936% ( 73) 00:25:21.124 12592.070 - 12649.307: 39.9538% ( 60) 00:25:21.124 12649.307 - 12706.543: 40.7901% ( 76) 00:25:21.124 12706.543 - 12763.780: 41.4393% ( 59) 00:25:21.124 12763.780 - 12821.017: 42.0775% ( 58) 00:25:21.124 12821.017 - 12878.253: 42.8587% ( 71) 00:25:21.124 12878.253 - 12935.490: 43.9811% ( 102) 00:25:21.124 12935.490 - 12992.727: 44.9934% ( 92) 00:25:21.124 12992.727 - 13049.963: 46.3688% ( 125) 00:25:21.124 13049.963 - 13107.200: 47.6122% ( 113) 00:25:21.124 13107.200 - 13164.437: 48.7456% ( 103) 00:25:21.124 13164.437 - 13221.673: 49.9560% ( 110) 00:25:21.124 13221.673 - 13278.910: 50.9023% ( 86) 00:25:21.124 13278.910 - 13336.147: 51.6175% ( 65) 00:25:21.124 13336.147 - 13393.383: 52.3438% ( 66) 00:25:21.124 13393.383 - 13450.620: 53.1250% ( 71) 00:25:21.124 13450.620 - 13507.857: 53.7302% ( 55) 00:25:21.124 13507.857 - 13565.093: 54.4344% ( 64) 00:25:21.124 13565.093 - 13622.330: 55.1496% ( 65) 00:25:21.124 13622.330 - 13679.567: 55.7989% ( 59) 00:25:21.124 13679.567 - 13736.803: 56.5031% ( 64) 00:25:21.124 13736.803 - 13794.040: 57.2953% ( 72) 00:25:21.124 13794.040 - 13851.277: 58.4397% ( 104) 00:25:21.124 13851.277 - 13908.514: 59.5841% ( 104) 00:25:21.124 13908.514 - 13965.750: 60.4313% ( 77) 00:25:21.124 13965.750 - 14022.987: 60.8825% ( 41) 00:25:21.124 14022.987 - 14080.224: 61.2346% ( 32) 00:25:21.124 14080.224 - 14137.460: 61.5427% ( 28) 00:25:21.124 14137.460 - 14194.697: 61.8508% ( 28) 00:25:21.124 14194.697 - 14251.934: 62.0929% ( 22) 00:25:21.124 14251.934 - 14309.170: 62.3900% ( 27) 00:25:21.124 14309.170 - 14366.407: 62.6981% ( 28) 00:25:21.124 14366.407 - 14423.644: 62.9511% ( 23) 00:25:21.124 14423.644 - 14480.880: 63.2372% ( 26) 00:25:21.124 14480.880 - 14538.117: 63.6664% ( 39) 00:25:21.124 14538.117 - 14595.354: 64.0625% ( 36) 00:25:21.124 14595.354 - 14652.590: 64.4586% ( 36) 00:25:21.124 14652.590 - 14767.064: 65.2839% ( 75) 00:25:21.124 14767.064 - 14881.537: 66.5933% ( 119) 00:25:21.124 14881.537 - 14996.010: 67.7047% ( 101) 00:25:21.124 14996.010 - 15110.484: 69.2892% ( 144) 00:25:21.124 15110.484 - 15224.957: 71.0497% ( 160) 00:25:21.124 15224.957 - 15339.431: 72.9313% ( 171) 00:25:21.124 15339.431 - 15453.904: 74.3508% ( 129) 00:25:21.124 15453.904 - 15568.377: 75.8473% ( 136) 00:25:21.124 15568.377 - 15682.851: 77.1457% ( 118) 00:25:21.124 15682.851 - 15797.324: 78.3121% ( 106) 00:25:21.124 15797.324 - 15911.797: 79.5555% ( 113) 00:25:21.124 15911.797 - 16026.271: 80.6888% ( 103) 00:25:21.124 16026.271 - 16140.744: 81.3490% ( 60) 00:25:21.124 16140.744 - 16255.217: 82.0092% ( 60) 00:25:21.124 16255.217 - 16369.691: 82.6474% ( 58) 00:25:21.124 16369.691 - 16484.164: 83.4727% ( 75) 00:25:21.124 16484.164 - 16598.638: 84.3970% ( 84) 00:25:21.124 16598.638 - 16713.111: 85.1783% ( 71) 00:25:21.124 16713.111 - 16827.584: 85.8605% ( 62) 00:25:21.124 16827.584 - 16942.058: 86.6637% ( 73) 00:25:21.124 16942.058 - 17056.531: 87.4890% ( 75) 00:25:21.124 17056.531 - 17171.004: 88.4353% ( 86) 00:25:21.124 17171.004 - 17285.478: 89.4036% ( 88) 00:25:21.124 17285.478 - 17399.951: 90.1188% ( 65) 00:25:21.124 17399.951 - 17514.424: 90.7240% ( 55) 00:25:21.124 17514.424 - 17628.898: 91.3622% ( 58) 00:25:21.124 17628.898 - 17743.371: 91.9124% ( 50) 00:25:21.124 17743.371 - 17857.845: 92.4846% ( 52) 00:25:21.124 17857.845 - 17972.318: 93.3319% ( 77) 00:25:21.124 17972.318 - 18086.791: 94.1681% ( 76) 00:25:21.124 18086.791 - 18201.265: 95.0814% ( 83) 00:25:21.124 18201.265 - 18315.738: 95.4115% ( 30) 00:25:21.124 18315.738 - 18430.211: 
95.6426% ( 21) 00:25:21.124 18430.211 - 18544.685: 95.7746% ( 12) 00:25:21.124 18544.685 - 18659.158: 95.8737% ( 9) 00:25:21.124 18659.158 - 18773.631: 95.9727% ( 9) 00:25:21.124 18773.631 - 18888.105: 96.0387% ( 6) 00:25:21.124 18888.105 - 19002.578: 96.1158% ( 7) 00:25:21.124 19002.578 - 19117.052: 96.2038% ( 8) 00:25:21.124 19117.052 - 19231.525: 96.2588% ( 5) 00:25:21.124 19231.525 - 19345.998: 96.4129% ( 14) 00:25:21.124 19345.998 - 19460.472: 96.5229% ( 10) 00:25:21.124 19460.472 - 19574.945: 96.6109% ( 8) 00:25:21.124 19574.945 - 19689.418: 96.6879% ( 7) 00:25:21.124 19689.418 - 19803.892: 96.7540% ( 6) 00:25:21.124 19803.892 - 19918.365: 96.8860% ( 12) 00:25:21.124 19918.365 - 20032.838: 97.0841% ( 18) 00:25:21.124 20032.838 - 20147.312: 97.1501% ( 6) 00:25:21.124 20147.312 - 20261.785: 97.2491% ( 9) 00:25:21.124 20261.785 - 20376.259: 97.3261% ( 7) 00:25:21.124 20376.259 - 20490.732: 97.5022% ( 16) 00:25:21.124 20490.732 - 20605.205: 97.8983% ( 36) 00:25:21.124 20605.205 - 20719.679: 98.2394% ( 31) 00:25:21.124 20719.679 - 20834.152: 98.3825% ( 13) 00:25:21.124 20834.152 - 20948.625: 98.4595% ( 7) 00:25:21.124 20948.625 - 21063.099: 98.5365% ( 7) 00:25:21.124 21063.099 - 21177.572: 98.5585% ( 2) 00:25:21.124 21177.572 - 21292.045: 98.5915% ( 3) 00:25:21.124 32968.328 - 33197.275: 98.6356% ( 4) 00:25:21.124 33197.275 - 33426.222: 98.7016% ( 6) 00:25:21.124 33426.222 - 33655.169: 98.7896% ( 8) 00:25:21.124 33655.169 - 33884.115: 98.8776% ( 8) 00:25:21.124 33884.115 - 34113.062: 98.9547% ( 7) 00:25:21.124 34113.062 - 34342.009: 98.9987% ( 4) 00:25:21.124 34342.009 - 34570.955: 99.0537% ( 5) 00:25:21.124 34570.955 - 34799.902: 99.1087% ( 5) 00:25:21.124 34799.902 - 35028.849: 99.1527% ( 4) 00:25:21.124 35028.849 - 35257.796: 99.1967% ( 4) 00:25:21.124 35257.796 - 35486.742: 99.2518% ( 5) 00:25:21.124 35486.742 - 35715.689: 99.2958% ( 4) 00:25:21.124 41439.357 - 41668.304: 99.3288% ( 3) 00:25:21.124 41668.304 - 41897.251: 99.4058% ( 7) 00:25:21.124 41897.251 - 42126.197: 99.5048% ( 9) 00:25:21.124 43270.931 - 43499.878: 99.5599% ( 5) 00:25:21.124 43499.878 - 43728.824: 99.6039% ( 4) 00:25:21.124 43728.824 - 43957.771: 99.6699% ( 6) 00:25:21.124 43957.771 - 44186.718: 99.7249% ( 5) 00:25:21.124 44186.718 - 44415.665: 99.7799% ( 5) 00:25:21.124 44415.665 - 44644.611: 99.8349% ( 5) 00:25:21.124 44644.611 - 44873.558: 99.8900% ( 5) 00:25:21.124 44873.558 - 45102.505: 99.9560% ( 6) 00:25:21.124 45102.505 - 45331.452: 100.0000% ( 4) 00:25:21.124 00:25:21.124 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:25:21.124 ============================================================================== 00:25:21.124 Range in us Cumulative IO count 00:25:21.124 8986.159 - 9043.396: 0.0110% ( 1) 00:25:21.124 9272.342 - 9329.579: 0.0660% ( 5) 00:25:21.124 9329.579 - 9386.816: 0.1430% ( 7) 00:25:21.124 9386.816 - 9444.052: 0.2091% ( 6) 00:25:21.124 9444.052 - 9501.289: 0.2861% ( 7) 00:25:21.124 9501.289 - 9558.526: 0.4952% ( 19) 00:25:21.124 9558.526 - 9615.762: 0.6052% ( 10) 00:25:21.124 9615.762 - 9672.999: 0.7702% ( 15) 00:25:21.124 9672.999 - 9730.236: 0.9133% ( 13) 00:25:21.124 9730.236 - 9787.472: 1.1884% ( 25) 00:25:21.124 9787.472 - 9844.709: 1.2764% ( 8) 00:25:21.124 9844.709 - 9901.946: 1.3864% ( 10) 00:25:21.124 9901.946 - 9959.183: 1.5075% ( 11) 00:25:21.124 9959.183 - 10016.419: 1.7276% ( 20) 00:25:21.124 10016.419 - 10073.656: 1.9256% ( 18) 00:25:21.124 10073.656 - 10130.893: 2.3658% ( 40) 00:25:21.124 10130.893 - 10188.129: 2.7949% ( 39) 00:25:21.124 10188.129 - 10245.366: 
3.3341% ( 49) 00:25:21.125 10245.366 - 10302.603: 3.8952% ( 51) 00:25:21.125 10302.603 - 10359.839: 4.5224% ( 57) 00:25:21.125 10359.839 - 10417.076: 5.0616% ( 49) 00:25:21.125 10417.076 - 10474.313: 5.8649% ( 73) 00:25:21.125 10474.313 - 10531.549: 6.5471% ( 62) 00:25:21.125 10531.549 - 10588.786: 6.9762% ( 39) 00:25:21.125 10588.786 - 10646.023: 7.5484% ( 52) 00:25:21.125 10646.023 - 10703.259: 7.9776% ( 39) 00:25:21.125 10703.259 - 10760.496: 8.4177% ( 40) 00:25:21.125 10760.496 - 10817.733: 8.8468% ( 39) 00:25:21.125 10817.733 - 10874.969: 9.2870% ( 40) 00:25:21.125 10874.969 - 10932.206: 9.9692% ( 62) 00:25:21.125 10932.206 - 10989.443: 10.9705% ( 91) 00:25:21.125 10989.443 - 11046.679: 11.8178% ( 77) 00:25:21.125 11046.679 - 11103.916: 12.4890% ( 61) 00:25:21.125 11103.916 - 11161.153: 13.3143% ( 75) 00:25:21.125 11161.153 - 11218.390: 14.4256% ( 101) 00:25:21.125 11218.390 - 11275.626: 15.4930% ( 97) 00:25:21.125 11275.626 - 11332.863: 16.3842% ( 81) 00:25:21.125 11332.863 - 11390.100: 17.3415% ( 87) 00:25:21.125 11390.100 - 11447.336: 18.3209% ( 89) 00:25:21.125 11447.336 - 11504.573: 19.2782% ( 87) 00:25:21.125 11504.573 - 11561.810: 20.2575% ( 89) 00:25:21.125 11561.810 - 11619.046: 21.0057% ( 68) 00:25:21.125 11619.046 - 11676.283: 21.8200% ( 74) 00:25:21.125 11676.283 - 11733.520: 23.0414% ( 111) 00:25:21.125 11733.520 - 11790.756: 23.9657% ( 84) 00:25:21.125 11790.756 - 11847.993: 24.9560% ( 90) 00:25:21.125 11847.993 - 11905.230: 25.9243% ( 88) 00:25:21.125 11905.230 - 11962.466: 27.0026% ( 98) 00:25:21.125 11962.466 - 12019.703: 28.1580% ( 105) 00:25:21.125 12019.703 - 12076.940: 29.2584% ( 100) 00:25:21.125 12076.940 - 12134.176: 30.6228% ( 124) 00:25:21.125 12134.176 - 12191.413: 31.7672% ( 104) 00:25:21.125 12191.413 - 12248.650: 32.8675% ( 100) 00:25:21.125 12248.650 - 12305.886: 34.0449% ( 107) 00:25:21.125 12305.886 - 12363.123: 35.3323% ( 117) 00:25:21.125 12363.123 - 12420.360: 36.2456% ( 83) 00:25:21.125 12420.360 - 12477.597: 37.0489% ( 73) 00:25:21.125 12477.597 - 12534.833: 37.7531% ( 64) 00:25:21.125 12534.833 - 12592.070: 38.6554% ( 82) 00:25:21.125 12592.070 - 12649.307: 39.7227% ( 97) 00:25:21.125 12649.307 - 12706.543: 40.8891% ( 106) 00:25:21.125 12706.543 - 12763.780: 41.9234% ( 94) 00:25:21.125 12763.780 - 12821.017: 43.1558% ( 112) 00:25:21.125 12821.017 - 12878.253: 44.5092% ( 123) 00:25:21.125 12878.253 - 12935.490: 45.8957% ( 126) 00:25:21.125 12935.490 - 12992.727: 46.8310% ( 85) 00:25:21.125 12992.727 - 13049.963: 47.6783% ( 77) 00:25:21.125 13049.963 - 13107.200: 48.5255% ( 77) 00:25:21.125 13107.200 - 13164.437: 49.4718% ( 86) 00:25:21.125 13164.437 - 13221.673: 50.0770% ( 55) 00:25:21.125 13221.673 - 13278.910: 50.6492% ( 52) 00:25:21.125 13278.910 - 13336.147: 51.2764% ( 57) 00:25:21.125 13336.147 - 13393.383: 51.8156% ( 49) 00:25:21.125 13393.383 - 13450.620: 52.4428% ( 57) 00:25:21.125 13450.620 - 13507.857: 53.1580% ( 65) 00:25:21.125 13507.857 - 13565.093: 54.1043% ( 86) 00:25:21.125 13565.093 - 13622.330: 55.1166% ( 92) 00:25:21.125 13622.330 - 13679.567: 55.9419% ( 75) 00:25:21.125 13679.567 - 13736.803: 56.6351% ( 63) 00:25:21.125 13736.803 - 13794.040: 57.1743% ( 49) 00:25:21.125 13794.040 - 13851.277: 57.8565% ( 62) 00:25:21.125 13851.277 - 13908.514: 58.4397% ( 53) 00:25:21.125 13908.514 - 13965.750: 59.0999% ( 60) 00:25:21.125 13965.750 - 14022.987: 59.5290% ( 39) 00:25:21.125 14022.987 - 14080.224: 59.9252% ( 36) 00:25:21.125 14080.224 - 14137.460: 60.3763% ( 41) 00:25:21.125 14137.460 - 14194.697: 60.7945% ( 38) 00:25:21.125 
14194.697 - 14251.934: 61.2786% ( 44) 00:25:21.125 14251.934 - 14309.170: 61.8398% ( 51) 00:25:21.125 14309.170 - 14366.407: 62.5110% ( 61) 00:25:21.125 14366.407 - 14423.644: 63.2152% ( 64) 00:25:21.125 14423.644 - 14480.880: 63.9745% ( 69) 00:25:21.125 14480.880 - 14538.117: 64.4696% ( 45) 00:25:21.125 14538.117 - 14595.354: 64.9428% ( 43) 00:25:21.125 14595.354 - 14652.590: 65.6030% ( 60) 00:25:21.125 14652.590 - 14767.064: 66.7584% ( 105) 00:25:21.125 14767.064 - 14881.537: 67.8587% ( 100) 00:25:21.125 14881.537 - 14996.010: 68.7280% ( 79) 00:25:21.125 14996.010 - 15110.484: 69.6413% ( 83) 00:25:21.125 15110.484 - 15224.957: 70.5546% ( 83) 00:25:21.125 15224.957 - 15339.431: 72.1501% ( 145) 00:25:21.125 15339.431 - 15453.904: 73.8776% ( 157) 00:25:21.125 15453.904 - 15568.377: 75.0770% ( 109) 00:25:21.125 15568.377 - 15682.851: 76.2874% ( 110) 00:25:21.125 15682.851 - 15797.324: 77.4538% ( 106) 00:25:21.125 15797.324 - 15911.797: 78.3891% ( 85) 00:25:21.125 15911.797 - 16026.271: 79.3574% ( 88) 00:25:21.125 16026.271 - 16140.744: 80.1827% ( 75) 00:25:21.125 16140.744 - 16255.217: 81.0409% ( 78) 00:25:21.125 16255.217 - 16369.691: 81.9102% ( 79) 00:25:21.125 16369.691 - 16484.164: 82.7355% ( 75) 00:25:21.125 16484.164 - 16598.638: 83.5057% ( 70) 00:25:21.125 16598.638 - 16713.111: 84.5841% ( 98) 00:25:21.125 16713.111 - 16827.584: 85.9485% ( 124) 00:25:21.125 16827.584 - 16942.058: 87.1479% ( 109) 00:25:21.125 16942.058 - 17056.531: 87.8741% ( 66) 00:25:21.125 17056.531 - 17171.004: 88.8864% ( 92) 00:25:21.125 17171.004 - 17285.478: 89.7997% ( 83) 00:25:21.125 17285.478 - 17399.951: 90.7130% ( 83) 00:25:21.125 17399.951 - 17514.424: 91.5823% ( 79) 00:25:21.125 17514.424 - 17628.898: 92.2425% ( 60) 00:25:21.125 17628.898 - 17743.371: 93.0348% ( 72) 00:25:21.125 17743.371 - 17857.845: 93.6400% ( 55) 00:25:21.125 17857.845 - 17972.318: 94.3772% ( 67) 00:25:21.125 17972.318 - 18086.791: 94.7513% ( 34) 00:25:21.125 18086.791 - 18201.265: 95.0484% ( 27) 00:25:21.125 18201.265 - 18315.738: 95.1915% ( 13) 00:25:21.125 18315.738 - 18430.211: 95.3675% ( 16) 00:25:21.125 18430.211 - 18544.685: 95.4886% ( 11) 00:25:21.125 18544.685 - 18659.158: 95.6206% ( 12) 00:25:21.125 18659.158 - 18773.631: 95.7857% ( 15) 00:25:21.125 18773.631 - 18888.105: 95.9837% ( 18) 00:25:21.125 18888.105 - 19002.578: 96.3358% ( 32) 00:25:21.125 19002.578 - 19117.052: 96.4459% ( 10) 00:25:21.125 19117.052 - 19231.525: 96.5779% ( 12) 00:25:21.125 19231.525 - 19345.998: 96.7760% ( 18) 00:25:21.125 19345.998 - 19460.472: 96.9190% ( 13) 00:25:21.125 19460.472 - 19574.945: 97.0180% ( 9) 00:25:21.125 19574.945 - 19689.418: 97.0951% ( 7) 00:25:21.125 19689.418 - 19803.892: 97.1391% ( 4) 00:25:21.125 19803.892 - 19918.365: 97.1611% ( 2) 00:25:21.125 19918.365 - 20032.838: 97.1831% ( 2) 00:25:21.125 20147.312 - 20261.785: 97.2051% ( 2) 00:25:21.125 20261.785 - 20376.259: 97.4692% ( 24) 00:25:21.125 20376.259 - 20490.732: 97.7003% ( 21) 00:25:21.125 20490.732 - 20605.205: 97.8873% ( 17) 00:25:21.125 20605.205 - 20719.679: 97.9864% ( 9) 00:25:21.125 20719.679 - 20834.152: 98.0744% ( 8) 00:25:21.125 20834.152 - 20948.625: 98.1514% ( 7) 00:25:21.125 20948.625 - 21063.099: 98.2284% ( 7) 00:25:21.125 21063.099 - 21177.572: 98.2724% ( 4) 00:25:21.125 21177.572 - 21292.045: 98.3935% ( 11) 00:25:21.125 21292.045 - 21406.519: 98.4595% ( 6) 00:25:21.125 21406.519 - 21520.992: 98.5035% ( 4) 00:25:21.125 21520.992 - 21635.466: 98.5585% ( 5) 00:25:21.125 21635.466 - 21749.939: 98.5915% ( 3) 00:25:21.125 30907.808 - 31136.755: 98.6356% ( 4) 
00:25:21.125 31136.755 - 31365.701: 98.7236% ( 8)
00:25:21.125 31365.701 - 31594.648: 98.8116% ( 8)
00:25:21.125 31594.648 - 31823.595: 98.8886% ( 7)
00:25:21.125 31823.595 - 32052.541: 98.9547% ( 6)
00:25:21.125 32052.541 - 32281.488: 98.9987% ( 4)
00:25:21.125 32281.488 - 32510.435: 99.0427% ( 4)
00:25:21.125 32510.435 - 32739.382: 99.0977% ( 5)
00:25:21.125 32739.382 - 32968.328: 99.1527% ( 5)
00:25:21.125 32968.328 - 33197.275: 99.1967% ( 4)
00:25:21.125 33197.275 - 33426.222: 99.2518% ( 5)
00:25:21.125 33426.222 - 33655.169: 99.2958% ( 4)
00:25:21.125 38920.943 - 39149.890: 99.3068% ( 1)
00:25:21.125 39149.890 - 39378.837: 99.3728% ( 6)
00:25:21.125 39378.837 - 39607.783: 99.5379% ( 15)
00:25:21.125 39607.783 - 39836.730: 99.5489% ( 1)
00:25:21.125 40752.517 - 40981.464: 99.5709% ( 2)
00:25:21.125 40981.464 - 41210.410: 99.6259% ( 5)
00:25:21.125 41210.410 - 41439.357: 99.6919% ( 6)
00:25:21.125 41439.357 - 41668.304: 99.7359% ( 4)
00:25:21.125 41668.304 - 41897.251: 99.8019% ( 6)
00:25:21.125 41897.251 - 42126.197: 99.8460% ( 4)
00:25:21.125 42126.197 - 42355.144: 99.9120% ( 6)
00:25:21.125 42355.144 - 42584.091: 99.9670% ( 5)
00:25:21.125 42584.091 - 42813.038: 100.0000% ( 3)
00:25:21.125
00:25:21.125 13:45:28 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:25:21.125
00:25:21.125 real 0m2.688s
00:25:21.126 user 0m2.263s
00:25:21.126 sys 0m0.308s
00:25:21.126 13:45:28 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:21.126 13:45:28 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:25:21.126 ************************************
00:25:21.126 END TEST nvme_perf
00:25:21.126 ************************************
00:25:21.126 13:45:28 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:25:21.126 13:45:28 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:25:21.126 13:45:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:21.126 13:45:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:25:21.126 ************************************
00:25:21.126 START TEST nvme_hello_world
00:25:21.126 ************************************
00:25:21.126 13:45:28 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:25:21.385 Initializing NVMe Controllers
00:25:21.385 Attached to 0000:00:10.0
00:25:21.385 Namespace ID: 1 size: 6GB
00:25:21.385 Attached to 0000:00:11.0
00:25:21.385 Namespace ID: 1 size: 5GB
00:25:21.385 Attached to 0000:00:13.0
00:25:21.385 Namespace ID: 1 size: 1GB
00:25:21.385 Attached to 0000:00:12.0
00:25:21.385 Namespace ID: 1 size: 4GB
00:25:21.385 Namespace ID: 2 size: 4GB
00:25:21.385 Namespace ID: 3 size: 4GB
00:25:21.385 Initialization complete.
00:25:21.385 INFO: using host memory buffer for IO
00:25:21.385 Hello world!
00:25:21.385 INFO: using host memory buffer for IO
00:25:21.385 Hello world!
00:25:21.385 INFO: using host memory buffer for IO
00:25:21.385 Hello world!
00:25:21.385 INFO: using host memory buffer for IO
00:25:21.385 Hello world!
00:25:21.385 INFO: using host memory buffer for IO
00:25:21.385 Hello world!
00:25:21.385 INFO: using host memory buffer for IO
00:25:21.385 Hello world!
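The hello_world example prints one "Hello world!" per attached namespace: it allocates an I/O queue pair, writes the string to LBA 0, polls for the completion, then reads it back and prints it. Below is a minimal sketch of that submit/poll loop, assuming the public SPDK NVMe API from spdk/nvme.h and spdk/env.h; the probe/attach plumbing and error handling are elided, and `g_ns` / `hello_world_io` are illustrative names, not the example's verbatim source.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ns *g_ns;   /* a namespace found during attach (assumed) */
static bool g_io_done;

/* Completion callback shared by the write and the read. */
static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	g_io_done = true;
}

static void
hello_world_io(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_qpair *qpair;
	uint32_t sector_size = spdk_nvme_ns_get_sector_size(g_ns);
	char *buf;

	/* One I/O queue pair; NULL opts selects the driver defaults. */
	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);

	/* DMA-able, pinned buffer sized to one sector of the namespace. */
	buf = spdk_zmalloc(sector_size, 0x1000, NULL,
			   SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	memcpy(buf, "Hello world!", 13);

	/* Write LBA 0, then busy-poll the queue pair for the completion. */
	g_io_done = false;
	spdk_nvme_ns_cmd_write(g_ns, qpair, buf, 0 /* LBA */, 1 /* count */,
			       io_complete, NULL, 0);
	while (!g_io_done) {
		spdk_nvme_qpair_process_completions(qpair, 0);
	}

	/* Read the sector back into the same buffer and poll again. */
	memset(buf, 0, sector_size);
	g_io_done = false;
	spdk_nvme_ns_cmd_read(g_ns, qpair, buf, 0, 1, io_complete, NULL, 0);
	while (!g_io_done) {
		spdk_nvme_qpair_process_completions(qpair, 0);
	}
	printf("%s\n", buf);   /* "Hello world!" as in the log above */

	spdk_free(buf);
	spdk_nvme_ctrlr_free_io_qpair(qpair);
}
```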
00:25:21.385
00:25:21.385 real 0m0.286s
00:25:21.385 user 0m0.098s
00:25:21.385 sys 0m0.144s
00:25:21.385 13:45:28 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:21.385 13:45:28 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:25:21.385 ************************************
00:25:21.385 END TEST nvme_hello_world
00:25:21.385 ************************************
00:25:21.385 13:45:28 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:25:21.385 13:45:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:25:21.385 13:45:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:21.385 13:45:28 nvme -- common/autotest_common.sh@10 -- # set +x
00:25:21.385 ************************************
00:25:21.385 START TEST nvme_sgl
00:25:21.385 ************************************
00:25:21.385 13:45:28 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:25:21.643 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:25:21.643 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:25:21.643 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:25:21.643 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:25:21.643 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:25:21.643 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:25:21.643 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:25:21.643 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:25:21.643 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:25:21.643 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:25:21.643 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:25:21.643 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:25:21.643 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:25:21.643 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:25:21.643 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:25:21.643 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:25:21.643 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:25:21.643 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:25:21.643 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:25:21.643 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:25:21.643 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:25:21.643 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:25:21.643 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:25:21.643 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:25:21.643 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:25:21.643 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:25:21.643 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:25:21.643 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:25:21.643 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:25:21.643 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:25:21.643 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:25:21.643 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:25:21.643 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:25:21.643 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:25:21.643 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:25:21.643 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:25:21.643 NVMe Readv/Writev Request test
00:25:21.643 Attached to 0000:00:10.0
00:25:21.643 Attached to 0000:00:11.0
00:25:21.643 Attached to 0000:00:13.0
00:25:21.643 Attached to 0000:00:12.0
00:25:21.643 0000:00:10.0: build_io_request_2 test passed
00:25:21.643 0000:00:10.0: build_io_request_4 test passed
00:25:21.643 0000:00:10.0: build_io_request_5 test passed
00:25:21.643 0000:00:10.0: build_io_request_6 test passed
00:25:21.643 0000:00:10.0: build_io_request_7 test passed
00:25:21.643 0000:00:10.0: build_io_request_10 test passed
00:25:21.643 0000:00:11.0: build_io_request_2 test passed
00:25:21.643 0000:00:11.0: build_io_request_4 test passed
00:25:21.643 0000:00:11.0: build_io_request_5 test passed
00:25:21.643 0000:00:11.0: build_io_request_6 test passed
00:25:21.643 0000:00:11.0: build_io_request_7 test passed
00:25:21.643 0000:00:11.0: build_io_request_10 test passed
00:25:21.643 Cleaning up...
00:25:21.643
00:25:21.643 real 0m0.360s
00:25:21.643 user 0m0.182s
00:25:21.643 sys 0m0.132s
00:25:21.902 ************************************
00:25:21.902 END TEST nvme_sgl
00:25:21.902 ************************************
00:25:21.902 13:45:29 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:21.902 13:45:29 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:25:21.902 13:45:29 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:25:21.902 13:45:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:25:21.902 13:45:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:21.902 13:45:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:25:21.902 ************************************
00:25:21.902 START TEST nvme_e2edp
00:25:21.902 ************************************
00:25:21.902 13:45:29 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:25:22.165 NVMe Write/Read with End-to-End data protection test
00:25:22.165 Attached to 0000:00:10.0
00:25:22.165 Attached to 0000:00:11.0
00:25:22.165 Attached to 0000:00:13.0
00:25:22.165 Attached to 0000:00:12.0
00:25:22.165 Cleaning up...
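In the nvme_sgl output above, "Invalid IO length parameter" is the expected outcome for the request builders that deliberately assemble scatter-gather lists whose total byte count does not work out to a whole number of LBAs; only the builders whose totals divide evenly by the sector size (2, 4, 5, 6, 7 and 10 on 0000:00:10.0 and 0000:00:11.0) submit and pass, and on the other two controllers every builder trips the check, presumably because those namespaces use a different sector size. A hypothetical helper mirroring that check is sketched below; `sgl_segment` and `io_length_is_valid` are illustrative names, not the driver's internal types.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

struct sgl_segment {
	void     *base;
	uint32_t  len;
};

/* Reject a request whose total SGL byte count is not a multiple of the
 * namespace sector size: it cannot be expressed as a whole number of LBAs. */
static bool
io_length_is_valid(const struct sgl_segment *sgl, size_t nseg,
		   uint32_t sector_size)
{
	uint64_t total = 0;

	for (size_t i = 0; i < nseg; i++) {
		total += sgl[i].len;
	}
	if (total == 0 || total % sector_size != 0) {
		fprintf(stderr, "Invalid IO length parameter\n");
		return false;
	}
	return true;
}
```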
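The nvme_e2edp pass that follows exercises namespaces formatted with protection information: each data block carries an 8-byte tuple of guard CRC, application tag and reference tag that host and controller both verify end to end. The guard field is a CRC-16 over the block data with polynomial 0x8BB7 (CRC-16/T10-DIF). The sketch below is the standard bitwise form of that algorithm for reference, not code lifted from nvme_dp; a real data path would use a table-driven or instruction-accelerated version.

```c
#include <stddef.h>
#include <stdint.h>

/* CRC-16/T10-DIF: polynomial 0x8BB7, initial value 0, no bit reflection.
 * Computes the guard field over one block of data. */
static uint16_t
crc16_t10dif(const uint8_t *buf, size_t len)
{
	uint16_t crc = 0;

	for (size_t i = 0; i < len; i++) {
		crc ^= (uint16_t)buf[i] << 8;
		for (int bit = 0; bit < 8; bit++) {
			crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
					     : (uint16_t)(crc << 1);
		}
	}
	return crc;
}
```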
00:25:22.165
00:25:22.165 real 0m0.289s
00:25:22.165 user 0m0.096s
00:25:22.165 sys 0m0.147s
00:25:22.166 13:45:29 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:22.166 13:45:29 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:25:22.166 ************************************
00:25:22.166 END TEST nvme_e2edp
00:25:22.166 ************************************
00:25:22.166 13:45:29 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:25:22.166 13:45:29 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:25:22.166 13:45:29 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:22.166 13:45:29 nvme -- common/autotest_common.sh@10 -- # set +x
00:25:22.166 ************************************
00:25:22.166 START TEST nvme_reserve
00:25:22.166 ************************************
00:25:22.166 13:45:29 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:25:22.425 =====================================================
00:25:22.425 NVMe Controller at PCI bus 0, device 16, function 0
00:25:22.425 =====================================================
00:25:22.425 Reservations: Not Supported
00:25:22.425 =====================================================
00:25:22.425 NVMe Controller at PCI bus 0, device 17, function 0
00:25:22.425 =====================================================
00:25:22.425 Reservations: Not Supported
00:25:22.425 =====================================================
00:25:22.425 NVMe Controller at PCI bus 0, device 19, function 0
00:25:22.425 =====================================================
00:25:22.425 Reservations: Not Supported
00:25:22.425 =====================================================
00:25:22.425 NVMe Controller at PCI bus 0, device 18, function 0
00:25:22.425 =====================================================
00:25:22.425 Reservations: Not Supported
00:25:22.425 Reservation test passed
00:25:22.425 ************************************
00:25:22.425 END TEST nvme_reserve
00:25:22.425 ************************************
00:25:22.425
00:25:22.425 real 0m0.286s
00:25:22.425 user 0m0.105s
00:25:22.425 sys 0m0.134s
00:25:22.425 13:45:30 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:22.425 13:45:30 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:25:22.425 13:45:30 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:25:22.425 13:45:30 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:25:22.425 13:45:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:22.425 13:45:30 nvme -- common/autotest_common.sh@10 -- # set +x
00:25:22.425 ************************************
00:25:22.425 START TEST nvme_err_injection
00:25:22.425 ************************************
00:25:22.425 13:45:30 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:25:22.994 NVMe Error Injection test
00:25:22.994 Attached to 0000:00:10.0
00:25:22.994 Attached to 0000:00:11.0
00:25:22.994 Attached to 0000:00:13.0
00:25:22.994 Attached to 0000:00:12.0
00:25:22.994 0000:00:10.0: get features failed as expected
00:25:22.994 0000:00:11.0: get features failed as expected
00:25:22.994 0000:00:13.0: get features failed as expected
00:25:22.994 0000:00:12.0: get features failed as expected
00:25:22.994 0000:00:10.0: get features successfully as expected
00:25:22.994 0000:00:11.0: get features successfully as expected
00:25:22.994 0000:00:13.0: get features successfully as expected
00:25:22.994 0000:00:12.0: get features successfully as expected
00:25:22.994 0000:00:10.0: read failed as expected
00:25:22.994 0000:00:11.0: read failed as expected
00:25:22.994 0000:00:13.0: read failed as expected
00:25:22.994 0000:00:12.0: read failed as expected
00:25:22.994 0000:00:10.0: read successfully as expected
00:25:22.994 0000:00:11.0: read successfully as expected
00:25:22.994 0000:00:13.0: read successfully as expected
00:25:22.994 0000:00:12.0: read successfully as expected
00:25:22.994 Cleaning up...
00:25:22.994
00:25:22.994 real 0m0.304s
00:25:22.994 user 0m0.106s
00:25:22.994 sys 0m0.151s
00:25:22.994 13:45:30 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:22.994 13:45:30 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:25:22.994 ************************************
00:25:22.994 END TEST nvme_err_injection
00:25:22.994 ************************************
00:25:22.994 13:45:30 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:25:22.994 13:45:30 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:25:22.994 13:45:30 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:22.994 13:45:30 nvme -- common/autotest_common.sh@10 -- # set +x
00:25:22.994 ************************************
00:25:22.994 START TEST nvme_overhead
00:25:22.994 ************************************
00:25:22.994 13:45:30 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:25:24.410 Initializing NVMe Controllers
00:25:24.410 Attached to 0000:00:10.0
00:25:24.410 Attached to 0000:00:11.0
00:25:24.410 Attached to 0000:00:13.0
00:25:24.410 Attached to 0000:00:12.0
00:25:24.410 Initialization complete. Launching workers.
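The error-injection pass above flips each command between "failed as expected" and "successfully as expected" by arming and then disarming a fake completion status in the driver rather than breaking real hardware. A hedged sketch of that pattern is below, assuming SPDK's public spdk_nvme_qpair_add_cmd_error_injection()/spdk_nvme_qpair_remove_cmd_error_injection() hooks; the exact opcode, status codes and counts here are illustrative choices, not the arguments the err_injection tool necessarily uses, so check spdk/nvme.h for the authoritative signatures.

```c
#include "spdk/nvme.h"

static void
inject_get_features_failure(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Arm one fake "invalid field" completion on the admin queue
	 * (qpair == NULL) for the Get Features opcode. */
	spdk_nvme_qpair_add_cmd_error_injection(ctrlr, NULL,
						SPDK_NVME_OPC_GET_FEATURES,
						false /* do_not_submit */,
						0     /* timeout_in_us */,
						1     /* err_count */,
						SPDK_NVME_SCT_GENERIC,
						SPDK_NVME_SC_INVALID_FIELD);

	/* A Get Features issued now completes with the injected status:
	 * "get features failed as expected" in the log above.  The "read
	 * failed as expected" phase works the same way on an I/O queue pair
	 * with the Read opcode. */

	/* Disarm the injection; the next Get Features completes normally,
	 * giving "get features successfully as expected". */
	spdk_nvme_qpair_remove_cmd_error_injection(ctrlr, NULL,
						   SPDK_NVME_OPC_GET_FEATURES);
}
```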
00:25:24.410 submit (in ns) avg, min, max = 13323.3, 9527.5, 48338.9
00:25:24.410 complete (in ns) avg, min, max = 7657.3, 5557.2, 1225648.9
00:25:24.410
00:25:24.410 Submit histogram
00:25:24.410 ================
00:25:24.410 Range in us Cumulative Count
00:25:24.410 [per-bucket cumulative counts from 9.502 us through 48.517 us (100.0000%) elided]
00:25:24.411
00:25:24.411 Complete histogram
00:25:24.411 ==================
00:25:24.411 Range in us Cumulative Count
00:25:24.412 [per-bucket cumulative counts from 5.534 us through 1230.589 us (100.0000%) elided]
00:25:24.412
00:25:24.412 real 0m1.293s
00:25:24.412 user 0m1.086s
00:25:24.412 sys 0m0.156s
00:25:24.412 ************************************
00:25:24.412 END TEST nvme_overhead
00:25:24.412 ************************************
00:25:24.412 13:45:31 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:24.413 13:45:31 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x
00:25:24.413 13:45:31 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:25:24.413 13:45:31 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:25:24.413 13:45:31 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:24.413 13:45:31 nvme -- common/autotest_common.sh@10 -- # set +x
00:25:24.413 ************************************
00:25:24.413 START TEST nvme_arbitration
00:25:24.413 ************************************
00:25:24.413 13:45:31 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0
00:25:27.736 Initializing NVMe Controllers
00:25:27.736 Attached to 0000:00:10.0
00:25:27.736 Attached to 0000:00:11.0
00:25:27.736 Attached to 0000:00:13.0
00:25:27.736 Attached to 0000:00:12.0
00:25:27.736 Associating QEMU NVMe Ctrl (12340 ) with lcore 0
00:25:27.736 Associating QEMU NVMe Ctrl (12341 ) with lcore 1
00:25:27.736 Associating QEMU NVMe Ctrl (12343 ) with lcore 2
00:25:27.736 Associating QEMU NVMe Ctrl (12342 ) with lcore 3
00:25:27.736 Associating QEMU NVMe Ctrl (12342 ) with lcore 0
00:25:27.736 Associating QEMU NVMe Ctrl (12342 ) with lcore 1
00:25:27.736 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration:
00:25:27.736 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0
00:25:27.736 Initialization complete. Launching workers.
00:25:27.736 Starting thread on core 1 with urgent priority queue
00:25:27.736 Starting thread on core 2 with urgent priority queue
00:25:27.736 Starting thread on core 3 with urgent priority queue
00:25:27.736 Starting thread on core 0 with urgent priority queue
00:25:27.736 QEMU NVMe Ctrl (12340 ) core 0: 576.00 IO/s 173.61 secs/100000 ios
00:25:27.736 QEMU NVMe Ctrl (12342 ) core 0: 576.00 IO/s 173.61 secs/100000 ios
00:25:27.736 QEMU NVMe Ctrl (12341 ) core 1: 576.00 IO/s 173.61 secs/100000 ios
00:25:27.736 QEMU NVMe Ctrl (12342 ) core 1: 576.00 IO/s 173.61 secs/100000 ios
00:25:27.736 QEMU NVMe Ctrl (12343 ) core 2: 405.33 IO/s 246.71 secs/100000 ios
00:25:27.736 QEMU NVMe Ctrl (12342 ) core 3: 405.33 IO/s 246.71 secs/100000 ios
00:25:27.736 ========================================================
00:25:27.736
00:25:27.736 real 0m3.436s
00:25:27.736 user 0m9.402s
00:25:27.736 sys 0m0.167s
00:25:27.736 13:45:35 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:27.736 13:45:35 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x
00:25:27.736 ************************************
00:25:27.736 END TEST nvme_arbitration
00:25:27.736 ************************************
00:25:27.736 13:45:35 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:25:27.736 13:45:35 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:25:27.736 13:45:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:27.736 13:45:35 nvme -- common/autotest_common.sh@10 -- # set +x
00:25:27.736 ************************************
00:25:27.736 START TEST nvme_single_aen
00:25:27.736 ************************************
00:25:27.736 13:45:35 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0
00:25:27.996 Asynchronous Event Request test
00:25:27.996 Attached to 0000:00:10.0
00:25:27.996 Attached to 0000:00:11.0
00:25:27.996 Attached to 0000:00:13.0
00:25:27.996 Attached to 0000:00:12.0
00:25:27.996 Reset controller to setup AER completions for this process
00:25:27.996 Registering asynchronous event callbacks...
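[Editor's note] The arbitration run above exercises NVMe weighted-round-robin arbitration: worker threads drive qpairs created at different priorities, which is why the urgent-priority cores sustain 576.00 IO/s while the lower-priority ones get 405.33 IO/s. At the driver level a qpair's priority is fixed when it is allocated. A minimal sketch using the public SPDK API; the helper name and the assumption of an already-attached ctrlr with WRR enabled are illustrative, this is not the arbitration example's own source:

    #include "spdk/nvme.h"

    /* Allocate one I/O qpair at a given arbitration priority. Assumes the
     * controller was attached with weighted-round-robin arbitration
     * selected (ctrlr opts arb_mechanism = SPDK_NVME_CC_AMS_WRR at probe
     * time); qprio has no effect under the default round-robin. */
    static struct spdk_nvme_qpair *
    alloc_prio_qpair(struct spdk_nvme_ctrlr *ctrlr, enum spdk_nvme_qprio prio)
    {
            struct spdk_nvme_io_qpair_opts opts;

            spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
            opts.qprio = prio;   /* SPDK_NVME_QPRIO_URGENT/HIGH/MEDIUM/LOW */
            return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }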
00:25:27.996 Getting orig temperature thresholds of all controllers 00:25:27.996 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:25:27.996 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:25:27.996 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:25:27.996 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:25:27.996 Setting all controllers temperature threshold low to trigger AER 00:25:27.996 Waiting for all controllers temperature threshold to be set lower 00:25:27.996 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:25:27.996 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:25:27.996 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:25:27.996 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:25:27.996 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:25:27.996 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:25:27.996 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:25:27.996 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:25:27.996 Waiting for all controllers to trigger AER and reset threshold 00:25:27.996 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:25:27.996 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:25:27.996 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:25:27.996 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:25:27.996 Cleaning up... 00:25:27.996 ************************************ 00:25:27.996 END TEST nvme_single_aen 00:25:27.997 ************************************ 00:25:27.997 00:25:27.997 real 0m0.291s 00:25:27.997 user 0m0.102s 00:25:27.997 sys 0m0.142s 00:25:27.997 13:45:35 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:27.997 13:45:35 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:25:27.997 13:45:35 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:25:27.997 13:45:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:27.997 13:45:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:27.997 13:45:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:25:28.256 ************************************ 00:25:28.256 START TEST nvme_doorbell_aers 00:25:28.256 ************************************ 00:25:28.256 13:45:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:25:28.256 13:45:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:25:28.256 13:45:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:25:28.256 13:45:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:25:28.256 13:45:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:25:28.256 13:45:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:28.256 13:45:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:25:28.256 13:45:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:28.256 13:45:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:28.256 13:45:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 
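[Editor's note] get_nvme_bdfs above shells out to gen_nvme.sh and extracts .config[].params.traddr with jq. The same list can be produced straight from the driver by probing the local PCIe bus and printing each discovered controller's transport address. A self-contained sketch; the program name is made up, and it deliberately reports controllers without attaching to them:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* Print the PCI address (traddr) of every NVMe controller found,
     * roughly the C equivalent of gen_nvme.sh | jq -r
     * '.config[].params.traddr'. Returning false from probe_cb means
     * "report it, do not attach". */
    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
            printf("%s\n", trid->traddr);   /* e.g. 0000:00:10.0 */
            return false;
    }

    int
    main(void)
    {
            struct spdk_env_opts env_opts;

            spdk_env_opts_init(&env_opts);
            env_opts.name = "list_nvme_bdfs";
            if (spdk_env_init(&env_opts) < 0) {
                    return 1;
            }
            /* NULL transport ID: enumerate the local PCIe bus. */
            return spdk_nvme_probe(NULL, NULL, probe_cb, NULL, NULL) == 0 ? 0 : 1;
    }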
00:25:28.256 13:45:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:25:28.256 13:45:35 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:25:28.256 13:45:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:25:28.256 13:45:35 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:25:28.515 [2024-11-20 13:45:36.132592] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64941) is not found. Dropping the request. 00:25:38.499 Executing: test_write_invalid_db 00:25:38.499 Waiting for AER completion... 00:25:38.499 Failure: test_write_invalid_db 00:25:38.499 00:25:38.499 Executing: test_invalid_db_write_overflow_sq 00:25:38.499 Waiting for AER completion... 00:25:38.499 Failure: test_invalid_db_write_overflow_sq 00:25:38.499 00:25:38.499 Executing: test_invalid_db_write_overflow_cq 00:25:38.499 Waiting for AER completion... 00:25:38.499 Failure: test_invalid_db_write_overflow_cq 00:25:38.499 00:25:38.499 13:45:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:25:38.499 13:45:45 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:25:38.758 [2024-11-20 13:45:46.222977] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64941) is not found. Dropping the request. 00:25:48.751 Executing: test_write_invalid_db 00:25:48.751 Waiting for AER completion... 00:25:48.751 Failure: test_write_invalid_db 00:25:48.751 00:25:48.751 Executing: test_invalid_db_write_overflow_sq 00:25:48.751 Waiting for AER completion... 00:25:48.751 Failure: test_invalid_db_write_overflow_sq 00:25:48.751 00:25:48.751 Executing: test_invalid_db_write_overflow_cq 00:25:48.751 Waiting for AER completion... 00:25:48.751 Failure: test_invalid_db_write_overflow_cq 00:25:48.751 00:25:48.751 13:45:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:25:48.751 13:45:55 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:25:48.751 [2024-11-20 13:45:56.254383] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64941) is not found. Dropping the request. 00:25:58.782 Executing: test_write_invalid_db 00:25:58.782 Waiting for AER completion... 00:25:58.782 Failure: test_write_invalid_db 00:25:58.782 00:25:58.782 Executing: test_invalid_db_write_overflow_sq 00:25:58.782 Waiting for AER completion... 00:25:58.782 Failure: test_invalid_db_write_overflow_sq 00:25:58.782 00:25:58.782 Executing: test_invalid_db_write_overflow_cq 00:25:58.782 Waiting for AER completion... 
00:25:58.782 Failure: test_invalid_db_write_overflow_cq 00:25:58.782 00:25:58.782 13:46:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:25:58.782 13:46:06 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:25:58.782 [2024-11-20 13:46:06.336520] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64941) is not found. Dropping the request. 00:26:08.776 Executing: test_write_invalid_db 00:26:08.776 Waiting for AER completion... 00:26:08.776 Failure: test_write_invalid_db 00:26:08.776 00:26:08.776 Executing: test_invalid_db_write_overflow_sq 00:26:08.776 Waiting for AER completion... 00:26:08.776 Failure: test_invalid_db_write_overflow_sq 00:26:08.776 00:26:08.776 Executing: test_invalid_db_write_overflow_cq 00:26:08.776 Waiting for AER completion... 00:26:08.776 Failure: test_invalid_db_write_overflow_cq 00:26:08.776 00:26:08.776 ************************************ 00:26:08.776 END TEST nvme_doorbell_aers 00:26:08.777 ************************************ 00:26:08.777 00:26:08.777 real 0m40.335s 00:26:08.777 user 0m33.390s 00:26:08.777 sys 0m6.540s 00:26:08.777 13:46:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:08.777 13:46:16 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:26:08.777 13:46:16 nvme -- nvme/nvme.sh@97 -- # uname 00:26:08.777 13:46:16 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:26:08.777 13:46:16 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:26:08.777 13:46:16 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:26:08.777 13:46:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:08.777 13:46:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:26:08.777 ************************************ 00:26:08.777 START TEST nvme_multi_aen 00:26:08.777 ************************************ 00:26:08.777 13:46:16 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:26:08.777 [2024-11-20 13:46:16.401626] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64941) is not found. Dropping the request. 00:26:08.777 [2024-11-20 13:46:16.401848] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64941) is not found. Dropping the request. 00:26:08.777 [2024-11-20 13:46:16.401904] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64941) is not found. Dropping the request. 00:26:08.777 [2024-11-20 13:46:16.403697] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64941) is not found. Dropping the request. 00:26:08.777 [2024-11-20 13:46:16.403811] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64941) is not found. Dropping the request. 00:26:08.777 [2024-11-20 13:46:16.403870] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64941) is not found. Dropping the request. 00:26:08.777 [2024-11-20 13:46:16.405329] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64941) is not found. 
Dropping the request. 00:26:08.777 [2024-11-20 13:46:16.405419] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64941) is not found. Dropping the request. 00:26:08.777 [2024-11-20 13:46:16.405435] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64941) is not found. Dropping the request. 00:26:08.777 [2024-11-20 13:46:16.406732] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64941) is not found. Dropping the request. 00:26:08.777 [2024-11-20 13:46:16.406767] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64941) is not found. Dropping the request. 00:26:08.777 [2024-11-20 13:46:16.406779] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64941) is not found. Dropping the request. 00:26:08.777 Child process pid: 65458 00:26:09.037 [Child] Asynchronous Event Request test 00:26:09.037 [Child] Attached to 0000:00:10.0 00:26:09.037 [Child] Attached to 0000:00:11.0 00:26:09.037 [Child] Attached to 0000:00:13.0 00:26:09.037 [Child] Attached to 0000:00:12.0 00:26:09.037 [Child] Registering asynchronous event callbacks... 00:26:09.037 [Child] Getting orig temperature thresholds of all controllers 00:26:09.037 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:26:09.037 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:26:09.037 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:26:09.037 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:26:09.037 [Child] Waiting for all controllers to trigger AER and reset threshold 00:26:09.037 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:26:09.037 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:26:09.037 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:26:09.037 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:26:09.037 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:26:09.037 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:26:09.037 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:26:09.037 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:26:09.037 [Child] Cleaning up... 00:26:09.037 Asynchronous Event Request test 00:26:09.037 Attached to 0000:00:10.0 00:26:09.037 Attached to 0000:00:11.0 00:26:09.037 Attached to 0000:00:13.0 00:26:09.037 Attached to 0000:00:12.0 00:26:09.037 Reset controller to setup AER completions for this process 00:26:09.037 Registering asynchronous event callbacks... 
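[Editor's note] Every AER pass in this log follows the recipe the harness prints next: register an asynchronous-event callback, drop the temperature threshold (feature 0x04) below the controller's reported composite temperature so an async event fires, then restore the original 343 K from the callback. Reduced to the two key driver calls, as a sketch only; it assumes an already-attached ctrlr whose admin queue the caller polls, and is not the aer test's actual source:

    #include "spdk/nvme.h"

    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            if (!spdk_nvme_cpl_is_error(cpl)) {
                    /* Temperature crossed the threshold we lowered; the
                     * test would now restore the original 343 K value. */
            }
    }

    static void
    set_feat_done(void *arg, const struct spdk_nvme_cpl *cpl)
    {
    }

    static void
    arm_temperature_aer(struct spdk_nvme_ctrlr *ctrlr)
    {
            spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);

            /* Feature 0x04, composite temperature threshold: set it to
             * 300 K, below the 323 K the controllers report above, so
             * an asynchronous event fires almost immediately. */
            spdk_nvme_ctrlr_cmd_set_feature(ctrlr,
                                            SPDK_NVME_FEAT_TEMPERATURE_THRESHOLD,
                                            300 /* cdw11 */, 0 /* cdw12 */,
                                            NULL, 0, set_feat_done, NULL);

            /* Caller keeps calling
             * spdk_nvme_ctrlr_process_admin_completions(ctrlr)
             * until aer_cb runs. */
    }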
00:26:09.037 Getting orig temperature thresholds of all controllers 00:26:09.037 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:26:09.037 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:26:09.037 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:26:09.037 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:26:09.037 Setting all controllers temperature threshold low to trigger AER 00:26:09.037 Waiting for all controllers temperature threshold to be set lower 00:26:09.037 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:26:09.037 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:26:09.037 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:26:09.037 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:26:09.037 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:26:09.037 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:26:09.037 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:26:09.037 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:26:09.037 Waiting for all controllers to trigger AER and reset threshold 00:26:09.037 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:26:09.037 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:26:09.037 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:26:09.037 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:26:09.037 Cleaning up... 00:26:09.297 00:26:09.297 real 0m0.624s 00:26:09.297 user 0m0.217s 00:26:09.297 sys 0m0.297s 00:26:09.297 13:46:16 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.297 13:46:16 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:26:09.297 ************************************ 00:26:09.297 END TEST nvme_multi_aen 00:26:09.297 ************************************ 00:26:09.297 13:46:16 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:26:09.297 13:46:16 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:26:09.297 13:46:16 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:09.297 13:46:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:26:09.297 ************************************ 00:26:09.297 START TEST nvme_startup 00:26:09.297 ************************************ 00:26:09.297 13:46:16 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:26:09.556 Initializing NVMe Controllers 00:26:09.556 Attached to 0000:00:10.0 00:26:09.556 Attached to 0000:00:11.0 00:26:09.556 Attached to 0000:00:13.0 00:26:09.556 Attached to 0000:00:12.0 00:26:09.556 Initialization complete. 00:26:09.556 Time used:193133.703 (us). 
00:26:09.556 00:26:09.556 real 0m0.301s 00:26:09.556 user 0m0.105s 00:26:09.556 sys 0m0.147s 00:26:09.556 13:46:17 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:09.556 13:46:17 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:26:09.556 ************************************ 00:26:09.556 END TEST nvme_startup 00:26:09.556 ************************************ 00:26:09.556 13:46:17 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:26:09.556 13:46:17 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:09.556 13:46:17 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:09.556 13:46:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:26:09.556 ************************************ 00:26:09.556 START TEST nvme_multi_secondary 00:26:09.556 ************************************ 00:26:09.556 13:46:17 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:26:09.556 13:46:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65514 00:26:09.556 13:46:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:26:09.556 13:46:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65515 00:26:09.556 13:46:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:26:09.556 13:46:17 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:26:12.841 Initializing NVMe Controllers 00:26:12.841 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:26:12.841 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:26:12.841 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:26:12.841 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:26:12.841 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:26:12.841 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:26:12.841 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:26:12.841 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:26:12.841 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:26:12.841 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:26:12.841 Initialization complete. Launching workers. 
00:26:12.841 ========================================================
00:26:12.841 Latency(us)
00:26:12.841 Device Information : IOPS MiB/s Average min max
00:26:12.841 PCIE (0000:00:10.0) NSID 1 from core 2: 3061.02 11.96 5220.06 1000.29 15435.08
00:26:12.841 PCIE (0000:00:11.0) NSID 1 from core 2: 3061.02 11.96 5219.89 1078.69 19653.34
00:26:12.841 PCIE (0000:00:13.0) NSID 1 from core 2: 3061.02 11.96 5219.95 1065.03 19359.67
00:26:12.841 PCIE (0000:00:12.0) NSID 1 from core 2: 3061.02 11.96 5219.80 1048.75 19321.74
00:26:12.841 PCIE (0000:00:12.0) NSID 2 from core 2: 3061.02 11.96 5219.92 1040.20 20893.69
00:26:12.841 PCIE (0000:00:12.0) NSID 3 from core 2: 3061.02 11.96 5219.81 1021.96 15849.80
00:26:12.841 ========================================================
00:26:12.841 Total : 18366.09 71.74 5219.90 1000.29 20893.69
00:26:12.841
00:26:13.100 13:46:20 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65514
00:26:13.100 Initializing NVMe Controllers
00:26:13.100 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:26:13.100 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:26:13.100 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:26:13.100 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:26:13.100 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:26:13.100 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:26:13.100 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:26:13.100 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:26:13.100 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:26:13.100 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:26:13.100 Initialization complete. Launching workers.
00:26:13.100 ========================================================
00:26:13.100 Latency(us)
00:26:13.100 Device Information : IOPS MiB/s Average min max
00:26:13.100 PCIE (0000:00:10.0) NSID 1 from core 1: 6059.26 23.67 2638.31 1137.50 6358.01
00:26:13.100 PCIE (0000:00:11.0) NSID 1 from core 1: 6059.26 23.67 2639.99 1229.73 6204.68
00:26:13.100 PCIE (0000:00:13.0) NSID 1 from core 1: 6059.26 23.67 2640.02 1183.65 5935.01
00:26:13.100 PCIE (0000:00:12.0) NSID 1 from core 1: 6059.26 23.67 2639.99 1209.41 6018.72
00:26:13.100 PCIE (0000:00:12.0) NSID 2 from core 1: 6059.26 23.67 2640.05 1219.61 6419.42
00:26:13.100 PCIE (0000:00:12.0) NSID 3 from core 1: 6059.26 23.67 2640.12 1189.84 6253.77
00:26:13.100 ========================================================
00:26:13.100 Total : 36355.58 142.01 2639.75 1137.50 6419.42
00:26:13.100
00:26:15.004 Initializing NVMe Controllers
00:26:15.004 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:26:15.004 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:26:15.004 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:26:15.004 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:26:15.004 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:26:15.004 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:26:15.004 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:26:15.004 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:26:15.004 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:26:15.004 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:26:15.004 Initialization complete. Launching workers.
00:26:15.004 ========================================================
00:26:15.004 Latency(us)
00:26:15.004 Device Information : IOPS MiB/s Average min max
00:26:15.004 PCIE (0000:00:10.0) NSID 1 from core 0: 9080.33 35.47 1760.40 826.75 7461.54
00:26:15.004 PCIE (0000:00:11.0) NSID 1 from core 0: 9080.33 35.47 1761.56 842.73 7840.17
00:26:15.004 PCIE (0000:00:13.0) NSID 1 from core 0: 9080.33 35.47 1761.53 830.82 7791.26
00:26:15.004 PCIE (0000:00:12.0) NSID 1 from core 0: 9080.33 35.47 1761.49 827.02 7897.63
00:26:15.004 PCIE (0000:00:12.0) NSID 2 from core 0: 9080.33 35.47 1761.47 834.90 7969.82
00:26:15.004 PCIE (0000:00:12.0) NSID 3 from core 0: 9080.33 35.47 1761.44 788.83 8057.21
00:26:15.004 ========================================================
00:26:15.004 Total : 54481.95 212.82 1761.31 788.83 8057.21
00:26:15.004
00:26:15.004 13:46:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65515
00:26:15.004 13:46:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65584
00:26:15.004 13:46:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1
00:26:15.004 13:46:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65585
00:26:15.004 13:46:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4
00:26:15.004 13:46:22 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2
00:26:18.294 Initializing NVMe Controllers
00:26:18.294 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:26:18.294 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:26:18.294 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:26:18.294 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:26:18.294 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:26:18.294 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:26:18.294 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:26:18.294 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:26:18.294 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:26:18.294 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:26:18.294 Initialization complete. Launching workers.
00:26:18.294 ========================================================
00:26:18.294 Latency(us)
00:26:18.294 Device Information : IOPS MiB/s Average min max
00:26:18.294 PCIE (0000:00:10.0) NSID 1 from core 0: 6361.41 24.85 2513.07 833.92 6927.34
00:26:18.294 PCIE (0000:00:11.0) NSID 1 from core 0: 6361.41 24.85 2514.65 843.72 6740.35
00:26:18.294 PCIE (0000:00:13.0) NSID 1 from core 0: 6361.41 24.85 2514.72 870.10 6752.13
00:26:18.294 PCIE (0000:00:12.0) NSID 1 from core 0: 6361.41 24.85 2514.78 877.60 7031.75
00:26:18.294 PCIE (0000:00:12.0) NSID 2 from core 0: 6361.41 24.85 2514.80 846.85 7102.53
00:26:18.294 PCIE (0000:00:12.0) NSID 3 from core 0: 6361.41 24.85 2514.96 849.48 7058.13
00:26:18.294 ========================================================
00:26:18.294 Total : 38168.44 149.10 2514.50 833.92 7102.53
00:26:18.294
00:26:18.553 Initializing NVMe Controllers
00:26:18.553 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:26:18.553 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:26:18.553 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:26:18.553 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:26:18.553 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1
00:26:18.553 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1
00:26:18.553 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1
00:26:18.553 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1
00:26:18.553 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1
00:26:18.553 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1
00:26:18.553 Initialization complete. Launching workers.
00:26:18.553 ========================================================
00:26:18.553 Latency(us)
00:26:18.553 Device Information : IOPS MiB/s Average min max
00:26:18.553 PCIE (0000:00:10.0) NSID 1 from core 1: 5949.40 23.24 2686.87 855.26 6715.18
00:26:18.553 PCIE (0000:00:11.0) NSID 1 from core 1: 5949.40 23.24 2688.56 870.42 6880.95
00:26:18.553 PCIE (0000:00:13.0) NSID 1 from core 1: 5949.40 23.24 2688.38 859.83 6928.82
00:26:18.553 PCIE (0000:00:12.0) NSID 1 from core 1: 5949.40 23.24 2688.21 860.98 7207.85
00:26:18.553 PCIE (0000:00:12.0) NSID 2 from core 1: 5949.40 23.24 2688.02 868.87 7320.23
00:26:18.553 PCIE (0000:00:12.0) NSID 3 from core 1: 5949.40 23.24 2687.97 882.95 6715.30
00:26:18.553 ========================================================
00:26:18.553 Total : 35696.42 139.44 2688.00 855.26 7320.23
00:26:18.553
00:26:20.504 Initializing NVMe Controllers
00:26:20.504 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:26:20.504 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:26:20.504 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:26:20.504 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:26:20.504 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2
00:26:20.504 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2
00:26:20.504 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2
00:26:20.504 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2
00:26:20.504 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2
00:26:20.504 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2
00:26:20.504 Initialization complete. Launching workers.
00:26:20.504 ========================================================
00:26:20.504 Latency(us)
00:26:20.504 Device Information : IOPS MiB/s Average min max
00:26:20.504 PCIE (0000:00:10.0) NSID 1 from core 2: 3425.56 13.38 4668.31 900.70 13280.55
00:26:20.504 PCIE (0000:00:11.0) NSID 1 from core 2: 3425.56 13.38 4670.13 935.38 13049.28
00:26:20.504 PCIE (0000:00:13.0) NSID 1 from core 2: 3425.56 13.38 4670.03 928.53 13933.97
00:26:20.504 PCIE (0000:00:12.0) NSID 1 from core 2: 3425.56 13.38 4669.69 940.23 16729.99
00:26:20.504 PCIE (0000:00:12.0) NSID 2 from core 2: 3425.56 13.38 4669.60 937.75 13356.09
00:26:20.504 PCIE (0000:00:12.0) NSID 3 from core 2: 3425.56 13.38 4669.97 938.27 13312.93
00:26:20.504 ========================================================
00:26:20.504 Total : 20553.39 80.29 4669.62 900.70 16729.99
00:26:20.504
00:26:20.504 ************************************
00:26:20.504 END TEST nvme_multi_secondary
00:26:20.504 ************************************
00:26:20.504 13:46:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65584
00:26:20.504 13:46:28 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65585
00:26:20.504
00:26:20.504 real 0m10.938s
00:26:20.504 user 0m18.536s
00:26:20.504 sys 0m1.033s
00:26:20.504 13:46:28 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable
00:26:20.504 13:46:28 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x
00:26:20.504 13:46:28 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT
00:26:20.504 13:46:28 nvme -- nvme/nvme.sh@102 -- # kill_stub
00:26:20.504 13:46:28 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64526 ]]
00:26:20.504 13:46:28 nvme -- common/autotest_common.sh@1094 -- # kill 64526
00:26:20.504 13:46:28 nvme -- common/autotest_common.sh@1095 -- # wait 64526
00:26:20.504 [2024-11-20 13:46:28.191062] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65457) is not found. Dropping the request.
00:26:20.504 [2024-11-20 13:46:28.191210] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65457) is not found. Dropping the request.
00:26:20.504 [2024-11-20 13:46:28.191353] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65457) is not found. Dropping the request.
00:26:20.504 [2024-11-20 13:46:28.191467] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65457) is not found. Dropping the request.
00:26:20.504 [2024-11-20 13:46:28.199348] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65457) is not found. Dropping the request.
00:26:20.504 [2024-11-20 13:46:28.199425] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65457) is not found. Dropping the request.
00:26:20.504 [2024-11-20 13:46:28.199450] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65457) is not found. Dropping the request.
00:26:20.504 [2024-11-20 13:46:28.199477] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65457) is not found. Dropping the request.
00:26:20.504 [2024-11-20 13:46:28.205053] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65457) is not found. Dropping the request.
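[Editor's note] The nvme_multi_secondary tables above cross-check internally. MiB/s is just IOPS times I/O size: 3061.02 IOPS x 4096 B = 3061.02 x 4096 / 2^20, which is about 11.96 MiB/s, matching the core 2 rows. And because each run uses queue depth 16 (the -q 16 above), Little's law ties throughput to the average latency column: 16 / 5219.9 us is about 3065 IO/s per namespace, against the measured 3061.02.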
00:26:20.505 [2024-11-20 13:46:28.205127] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65457) is not found. Dropping the request. 00:26:20.505 [2024-11-20 13:46:28.205152] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65457) is not found. Dropping the request. 00:26:20.505 [2024-11-20 13:46:28.205177] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65457) is not found. Dropping the request. 00:26:20.505 [2024-11-20 13:46:28.209930] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65457) is not found. Dropping the request. 00:26:20.505 [2024-11-20 13:46:28.209983] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65457) is not found. Dropping the request. 00:26:20.505 [2024-11-20 13:46:28.210000] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65457) is not found. Dropping the request. 00:26:20.505 [2024-11-20 13:46:28.210017] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65457) is not found. Dropping the request. 00:26:20.766 13:46:28 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:26:20.766 13:46:28 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:26:20.766 13:46:28 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:26:20.766 13:46:28 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:20.766 13:46:28 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:20.766 13:46:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:26:20.766 ************************************ 00:26:20.766 START TEST bdev_nvme_reset_stuck_adm_cmd 00:26:20.766 ************************************ 00:26:20.766 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:26:21.026 * Looking for test storage... 
00:26:21.026 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lcov --version 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:21.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.026 --rc genhtml_branch_coverage=1 00:26:21.026 --rc genhtml_function_coverage=1 00:26:21.026 --rc genhtml_legend=1 00:26:21.026 --rc geninfo_all_blocks=1 00:26:21.026 --rc geninfo_unexecuted_blocks=1 00:26:21.026 00:26:21.026 ' 00:26:21.026 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:21.026 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.026 --rc genhtml_branch_coverage=1 00:26:21.026 --rc genhtml_function_coverage=1 00:26:21.026 --rc genhtml_legend=1 00:26:21.026 --rc geninfo_all_blocks=1 00:26:21.026 --rc geninfo_unexecuted_blocks=1 00:26:21.026 00:26:21.026 ' 00:26:21.027 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:21.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.027 --rc genhtml_branch_coverage=1 00:26:21.027 --rc genhtml_function_coverage=1 00:26:21.027 --rc genhtml_legend=1 00:26:21.027 --rc geninfo_all_blocks=1 00:26:21.027 --rc geninfo_unexecuted_blocks=1 00:26:21.027 00:26:21.027 ' 00:26:21.027 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:21.027 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:21.027 --rc genhtml_branch_coverage=1 00:26:21.027 --rc genhtml_function_coverage=1 00:26:21.027 --rc genhtml_legend=1 00:26:21.027 --rc geninfo_all_blocks=1 00:26:21.027 --rc geninfo_unexecuted_blocks=1 00:26:21.027 00:26:21.027 ' 00:26:21.027 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:26:21.027 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:26:21.027 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:26:21.027 
13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:26:21.027 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:26:21.027 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:26:21.027 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:26:21.027 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:26:21.027 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:26:21.027 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:26:21.027 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:26:21.027 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:26:21.027 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:21.027 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:21.027 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:26:21.285 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:26:21.285 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:26:21.285 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:26:21.285 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:26:21.285 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:26:21.285 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:26:21.285 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65751 00:26:21.285 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:21.285 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65751 00:26:21.285 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65751 ']' 00:26:21.285 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.285 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:21.285 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:21.285 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:21.285 13:46:28 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:26:21.285 [2024-11-20 13:46:28.870832] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:26:21.285 [2024-11-20 13:46:28.870989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65751 ] 00:26:21.545 [2024-11-20 13:46:29.064130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:21.545 [2024-11-20 13:46:29.196062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:21.545 [2024-11-20 13:46:29.196225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:21.545 [2024-11-20 13:46:29.196862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.545 [2024-11-20 13:46:29.196904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:22.481 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:22.481 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:26:22.481 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:26:22.481 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.481 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:26:22.741 nvme0n1 00:26:22.741 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.741 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:26:22.741 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_RiFCy.txt 00:26:22.741 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:26:22.741 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.741 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:26:22.741 true 00:26:22.741 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.741 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:26:22.741 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1732110390 00:26:22.741 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65780 00:26:22.741 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:26:22.741 13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:26:22.741 
13:46:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:26:24.660 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:24.660 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.660 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:26:24.660 [2024-11-20 13:46:32.308463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:26:24.660 [2024-11-20 13:46:32.310211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.660 [2024-11-20 13:46:32.310333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:26:24.660 [2024-11-20 13:46:32.310406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.660 [2024-11-20 13:46:32.312645] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:26:24.660 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.660 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65780 00:26:24.660 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65780 00:26:24.660 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65780 00:26:24.660 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:26:24.660 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:26:24.660 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.660 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.660 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:26:24.660 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.660 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:26:24.660 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_RiFCy.txt 00:26:24.931 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:26:24.931 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:26:24.931 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_RiFCy.txt 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65751 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65751 ']' 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65751 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65751 00:26:24.932 killing process with pid 65751 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65751' 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65751 00:26:24.932 13:46:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65751 00:26:28.258 13:46:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:26:28.258 13:46:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:26:28.258 00:26:28.258 real 0m6.862s 00:26:28.258 user 0m24.013s 00:26:28.258 sys 0m0.811s 00:26:28.258 ************************************ 00:26:28.258 END TEST bdev_nvme_reset_stuck_adm_cmd 
00:26:28.258 ************************************ 00:26:28.258 13:46:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:28.258 13:46:35 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:26:28.258 13:46:35 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:26:28.258 13:46:35 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:26:28.258 13:46:35 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:28.258 13:46:35 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:28.258 13:46:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:26:28.258 ************************************ 00:26:28.258 START TEST nvme_fio 00:26:28.258 ************************************ 00:26:28.258 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:26:28.258 13:46:35 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:26:28.258 13:46:35 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:26:28.258 13:46:35 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:26:28.258 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:26:28.258 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:26:28.258 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:28.258 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:28.258 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:26:28.258 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:26:28.258 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:26:28.258 13:46:35 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:26:28.258 13:46:35 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:26:28.258 13:46:35 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:26:28.258 13:46:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:26:28.258 13:46:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:26:28.258 13:46:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:26:28.258 13:46:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:26:28.517 13:46:35 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:26:28.517 13:46:35 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:26:28.517 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:26:28.517 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:28.517 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:28.517 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:28.517 13:46:35 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:28.517 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:26:28.517 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:28.517 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:28.517 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:28.517 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:26:28.517 13:46:35 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:28.517 13:46:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:28.517 13:46:36 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:28.517 13:46:36 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:26:28.517 13:46:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:28.517 13:46:36 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:26:28.777 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:28.777 fio-3.35 00:26:28.777 Starting 1 thread 00:26:34.048 00:26:34.048 test: (groupid=0, jobs=1): err= 0: pid=65938: Wed Nov 20 13:46:41 2024 00:26:34.048 read: IOPS=21.6k, BW=84.5MiB/s (88.6MB/s)(169MiB/2001msec) 00:26:34.048 slat (nsec): min=4404, max=59398, avg=5416.71, stdev=1385.84 00:26:34.048 clat (usec): min=242, max=11017, avg=2950.86, stdev=496.43 00:26:34.048 lat (usec): min=247, max=11062, avg=2956.28, stdev=497.25 00:26:34.048 clat percentiles (usec): 00:26:34.048 | 1.00th=[ 2671], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2802], 00:26:34.048 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2900], 00:26:34.048 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3097], 00:26:34.048 | 99.00th=[ 5407], 99.50th=[ 7046], 99.90th=[ 8979], 99.95th=[ 9241], 00:26:34.048 | 99.99th=[10683] 00:26:34.048 bw ( KiB/s): min=84504, max=87848, per=99.36%, avg=85957.33, stdev=1714.36, samples=3 00:26:34.048 iops : min=21126, max=21962, avg=21489.33, stdev=428.59, samples=3 00:26:34.048 write: IOPS=21.5k, BW=83.8MiB/s (87.9MB/s)(168MiB/2001msec); 0 zone resets 00:26:34.048 slat (nsec): min=4525, max=58873, avg=5728.74, stdev=1373.57 00:26:34.048 clat (usec): min=260, max=10831, avg=2960.24, stdev=515.52 00:26:34.048 lat (usec): min=266, max=10847, avg=2965.96, stdev=516.32 00:26:34.048 clat percentiles (usec): 00:26:34.048 | 1.00th=[ 2671], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2802], 00:26:34.048 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2933], 00:26:34.048 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3097], 00:26:34.048 | 99.00th=[ 5866], 99.50th=[ 7308], 99.90th=[ 9110], 99.95th=[ 9372], 00:26:34.048 | 99.99th=[10552] 00:26:34.048 bw ( KiB/s): min=84416, max=87600, per=100.00%, avg=86101.33, stdev=1600.19, samples=3 00:26:34.048 iops : min=21104, max=21900, avg=21525.33, stdev=400.05, samples=3 00:26:34.048 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:26:34.048 lat (msec) : 2=0.05%, 4=98.07%, 10=1.82%, 20=0.02% 00:26:34.048 cpu : usr=99.20%, sys=0.15%, 
ctx=3, majf=0, minf=607 00:26:34.048 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:34.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:34.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:34.048 issued rwts: total=43278,42952,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:34.048 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:34.048 00:26:34.048 Run status group 0 (all jobs): 00:26:34.048 READ: bw=84.5MiB/s (88.6MB/s), 84.5MiB/s-84.5MiB/s (88.6MB/s-88.6MB/s), io=169MiB (177MB), run=2001-2001msec 00:26:34.048 WRITE: bw=83.8MiB/s (87.9MB/s), 83.8MiB/s-83.8MiB/s (87.9MB/s-87.9MB/s), io=168MiB (176MB), run=2001-2001msec 00:26:34.307 ----------------------------------------------------- 00:26:34.307 Suppressions used: 00:26:34.307 count bytes template 00:26:34.307 1 32 /usr/src/fio/parse.c 00:26:34.307 1 8 libtcmalloc_minimal.so 00:26:34.307 ----------------------------------------------------- 00:26:34.307 00:26:34.307 13:46:41 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:26:34.307 13:46:41 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:26:34.307 13:46:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:26:34.307 13:46:41 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:26:34.574 13:46:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:26:34.574 13:46:42 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:26:34.834 13:46:42 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:26:34.834 13:46:42 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:26:34.834 13:46:42 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:26:34.834 13:46:42 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:34.834 13:46:42 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:34.834 13:46:42 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:34.834 13:46:42 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:34.834 13:46:42 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:26:34.834 13:46:42 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:34.834 13:46:42 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:34.834 13:46:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:34.834 13:46:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:26:34.834 13:46:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:35.092 13:46:42 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:35.092 13:46:42 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:35.092 13:46:42 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:26:35.092 13:46:42 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:35.092 13:46:42 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:26:35.092 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:35.092 fio-3.35 00:26:35.092 Starting 1 thread 00:26:41.661 00:26:41.661 test: (groupid=0, jobs=1): err= 0: pid=66026: Wed Nov 20 13:46:48 2024 00:26:41.661 read: IOPS=21.7k, BW=84.8MiB/s (88.9MB/s)(170MiB/2001msec) 00:26:41.662 slat (nsec): min=4327, max=59376, avg=5282.82, stdev=1183.88 00:26:41.662 clat (usec): min=246, max=10819, avg=2942.43, stdev=297.58 00:26:41.662 lat (usec): min=251, max=10878, avg=2947.71, stdev=297.99 00:26:41.662 clat percentiles (usec): 00:26:41.662 | 1.00th=[ 2540], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2835], 00:26:41.662 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2933], 00:26:41.662 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3064], 95.00th=[ 3130], 00:26:41.662 | 99.00th=[ 3785], 99.50th=[ 5014], 99.90th=[ 5932], 99.95th=[ 8094], 00:26:41.662 | 99.99th=[10421] 00:26:41.662 bw ( KiB/s): min=86144, max=86824, per=99.64%, avg=86490.67, stdev=340.20, samples=3 00:26:41.662 iops : min=21536, max=21706, avg=21622.67, stdev=85.05, samples=3 00:26:41.662 write: IOPS=21.5k, BW=84.1MiB/s (88.2MB/s)(168MiB/2001msec); 0 zone resets 00:26:41.662 slat (nsec): min=4502, max=46493, avg=5686.15, stdev=1235.13 00:26:41.662 clat (usec): min=221, max=10607, avg=2946.89, stdev=298.02 00:26:41.662 lat (usec): min=227, max=10631, avg=2952.58, stdev=298.44 00:26:41.662 clat percentiles (usec): 00:26:41.662 | 1.00th=[ 2540], 5.00th=[ 2769], 10.00th=[ 2802], 20.00th=[ 2835], 00:26:41.662 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2933], 00:26:41.662 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3064], 95.00th=[ 3163], 00:26:41.662 | 99.00th=[ 3949], 99.50th=[ 4948], 99.90th=[ 6194], 99.95th=[ 8291], 00:26:41.662 | 99.99th=[10159] 00:26:41.662 bw ( KiB/s): min=85760, max=87504, per=100.00%, avg=86698.67, stdev=879.61, samples=3 00:26:41.662 iops : min=21440, max=21876, avg=21674.67, stdev=219.90, samples=3 00:26:41.662 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:26:41.662 lat (msec) : 2=0.38%, 4=98.64%, 10=0.94%, 20=0.01% 00:26:41.662 cpu : usr=99.30%, sys=0.05%, ctx=7, majf=0, minf=607 00:26:41.662 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:41.662 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:41.662 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:41.662 issued rwts: total=43423,43103,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:41.662 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:41.662 00:26:41.662 Run status group 0 (all jobs): 00:26:41.662 READ: bw=84.8MiB/s (88.9MB/s), 84.8MiB/s-84.8MiB/s (88.9MB/s-88.9MB/s), io=170MiB (178MB), run=2001-2001msec 00:26:41.662 WRITE: bw=84.1MiB/s (88.2MB/s), 84.1MiB/s-84.1MiB/s (88.2MB/s-88.2MB/s), io=168MiB (177MB), run=2001-2001msec 00:26:41.662 ----------------------------------------------------- 00:26:41.662 Suppressions used: 00:26:41.662 count bytes template 00:26:41.662 1 32 /usr/src/fio/parse.c 00:26:41.662 1 8 libtcmalloc_minimal.so 00:26:41.662 ----------------------------------------------------- 00:26:41.662 
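Each per-device fio pass above follows the same pattern: ldd the SPDK fio plugin, pick out the sanitizer runtime it links against, and LD_PRELOAD that runtime ahead of the plugin so the ASan runtime is loaded before fio dlopen()s the ioengine. A minimal sketch of the resulting invocation for this pass, with the paths, job file, and traddr taken from the trace:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096

The --bs=4096 value is derived just above from the spdk_nvme_identify probes, and the dots in traddr stand in for colons, which fio's filename parsing would otherwise split on.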
00:26:41.662 13:46:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:26:41.662 13:46:48 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:26:41.662 13:46:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:26:41.662 13:46:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:26:41.662 13:46:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:26:41.662 13:46:48 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:26:41.662 13:46:49 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:26:41.662 13:46:49 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:26:41.662 13:46:49 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:26:41.662 13:46:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:41.662 13:46:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:41.662 13:46:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:41.662 13:46:49 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:41.662 13:46:49 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:26:41.662 13:46:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:41.662 13:46:49 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:41.662 13:46:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:26:41.662 13:46:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:41.662 13:46:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:41.662 13:46:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:41.662 13:46:49 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:41.662 13:46:49 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:26:41.662 13:46:49 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:41.662 13:46:49 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:26:41.921 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:41.921 fio-3.35 00:26:41.921 Starting 1 thread 00:26:48.483 00:26:48.483 test: (groupid=0, jobs=1): err= 0: pid=66114: Wed Nov 20 13:46:55 2024 00:26:48.483 read: IOPS=21.6k, BW=84.3MiB/s (88.4MB/s)(169MiB/2001msec) 00:26:48.483 slat (nsec): min=4323, max=59553, avg=5313.39, stdev=1422.71 00:26:48.483 clat (usec): min=231, max=11794, avg=2954.81, stdev=517.92 00:26:48.483 lat (usec): min=236, max=11854, avg=2960.12, stdev=518.83 00:26:48.483 clat percentiles (usec): 00:26:48.483 | 1.00th=[ 2474], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2802], 00:26:48.483 | 
30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2900], 00:26:48.483 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3064], 95.00th=[ 3195], 00:26:48.483 | 99.00th=[ 5538], 99.50th=[ 7046], 99.90th=[ 8848], 99.95th=[ 8979], 00:26:48.483 | 99.99th=[11469] 00:26:48.483 bw ( KiB/s): min=81964, max=87848, per=98.54%, avg=85073.33, stdev=2956.24, samples=3 00:26:48.483 iops : min=20491, max=21962, avg=21268.33, stdev=739.06, samples=3 00:26:48.483 write: IOPS=21.4k, BW=83.7MiB/s (87.7MB/s)(167MiB/2001msec); 0 zone resets 00:26:48.483 slat (nsec): min=4442, max=69034, avg=5688.81, stdev=1460.51 00:26:48.483 clat (usec): min=258, max=11595, avg=2964.76, stdev=549.58 00:26:48.483 lat (usec): min=264, max=11618, avg=2970.45, stdev=550.48 00:26:48.483 clat percentiles (usec): 00:26:48.483 | 1.00th=[ 2442], 5.00th=[ 2737], 10.00th=[ 2769], 20.00th=[ 2802], 00:26:48.483 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2900], 60.00th=[ 2900], 00:26:48.483 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3064], 95.00th=[ 3195], 00:26:48.483 | 99.00th=[ 5997], 99.50th=[ 7635], 99.90th=[ 8848], 99.95th=[ 9110], 00:26:48.483 | 99.99th=[11207] 00:26:48.483 bw ( KiB/s): min=81652, max=88264, per=99.46%, avg=85233.33, stdev=3340.22, samples=3 00:26:48.483 iops : min=20413, max=22066, avg=21308.33, stdev=835.05, samples=3 00:26:48.483 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:26:48.483 lat (msec) : 2=0.40%, 4=97.47%, 10=2.06%, 20=0.03% 00:26:48.483 cpu : usr=99.15%, sys=0.25%, ctx=5, majf=0, minf=607 00:26:48.483 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:26:48.483 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.483 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:48.483 issued rwts: total=43187,42868,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.483 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:48.483 00:26:48.483 Run status group 0 (all jobs): 00:26:48.483 READ: bw=84.3MiB/s (88.4MB/s), 84.3MiB/s-84.3MiB/s (88.4MB/s-88.4MB/s), io=169MiB (177MB), run=2001-2001msec 00:26:48.483 WRITE: bw=83.7MiB/s (87.7MB/s), 83.7MiB/s-83.7MiB/s (87.7MB/s-87.7MB/s), io=167MiB (176MB), run=2001-2001msec 00:26:48.483 ----------------------------------------------------- 00:26:48.483 Suppressions used: 00:26:48.483 count bytes template 00:26:48.483 1 32 /usr/src/fio/parse.c 00:26:48.483 1 8 libtcmalloc_minimal.so 00:26:48.483 ----------------------------------------------------- 00:26:48.483 00:26:48.483 13:46:55 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:26:48.483 13:46:55 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:26:48.483 13:46:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:26:48.483 13:46:55 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:26:48.483 13:46:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:26:48.483 13:46:56 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:26:48.741 13:46:56 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:26:48.741 13:46:56 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:26:48.741 13:46:56 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:26:48.741 13:46:56 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:26:48.741 13:46:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:48.741 13:46:56 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:26:48.741 13:46:56 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:48.741 13:46:56 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:26:48.741 13:46:56 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:26:48.741 13:46:56 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:26:48.741 13:46:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:26:48.741 13:46:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:48.741 13:46:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:26:48.741 13:46:56 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:26:48.741 13:46:56 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:26:48.741 13:46:56 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:26:48.742 13:46:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:48.742 13:46:56 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:26:49.001 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:49.001 fio-3.35 00:26:49.001 Starting 1 thread 00:27:01.212 00:27:01.212 test: (groupid=0, jobs=1): err= 0: pid=66208: Wed Nov 20 13:47:07 2024 00:27:01.212 read: IOPS=22.0k, BW=85.8MiB/s (89.9MB/s)(172MiB/2001msec) 00:27:01.212 slat (nsec): min=4340, max=63243, avg=5335.27, stdev=1223.98 00:27:01.212 clat (usec): min=302, max=14646, avg=2907.30, stdev=395.29 00:27:01.212 lat (usec): min=307, max=14710, avg=2912.64, stdev=395.86 00:27:01.212 clat percentiles (usec): 00:27:01.212 | 1.00th=[ 2180], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2802], 00:27:01.212 | 30.00th=[ 2835], 40.00th=[ 2835], 50.00th=[ 2868], 60.00th=[ 2900], 00:27:01.212 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3032], 95.00th=[ 3163], 00:27:01.212 | 99.00th=[ 3916], 99.50th=[ 4686], 99.90th=[ 8455], 99.95th=[10683], 00:27:01.212 | 99.99th=[14091] 00:27:01.212 bw ( KiB/s): min=81456, max=90296, per=98.88%, avg=86832.00, stdev=4719.98, samples=3 00:27:01.212 iops : min=20364, max=22574, avg=21708.00, stdev=1179.99, samples=3 00:27:01.212 write: IOPS=21.8k, BW=85.2MiB/s (89.3MB/s)(170MiB/2001msec); 0 zone resets 00:27:01.212 slat (nsec): min=4449, max=43503, avg=5687.51, stdev=1202.08 00:27:01.212 clat (usec): min=227, max=14340, avg=2912.40, stdev=403.76 00:27:01.213 lat (usec): min=233, max=14363, avg=2918.09, stdev=404.26 00:27:01.213 clat percentiles (usec): 00:27:01.213 | 1.00th=[ 2147], 5.00th=[ 2704], 10.00th=[ 2737], 20.00th=[ 2802], 00:27:01.213 | 30.00th=[ 2835], 40.00th=[ 2868], 50.00th=[ 2868], 60.00th=[ 2900], 00:27:01.213 | 70.00th=[ 2933], 80.00th=[ 
2966], 90.00th=[ 3032], 95.00th=[ 3195], 00:27:01.213 | 99.00th=[ 3982], 99.50th=[ 4686], 99.90th=[ 8586], 99.95th=[11207], 00:27:01.213 | 99.99th=[13829] 00:27:01.213 bw ( KiB/s): min=81320, max=90952, per=99.78%, avg=87037.33, stdev=5062.71, samples=3 00:27:01.213 iops : min=20330, max=22738, avg=21759.33, stdev=1265.68, samples=3 00:27:01.213 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:27:01.213 lat (msec) : 2=0.73%, 4=98.26%, 10=0.91%, 20=0.06% 00:27:01.213 cpu : usr=99.30%, sys=0.05%, ctx=6, majf=0, minf=605 00:27:01.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:01.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:01.213 issued rwts: total=43928,43636,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:01.213 00:27:01.213 Run status group 0 (all jobs): 00:27:01.213 READ: bw=85.8MiB/s (89.9MB/s), 85.8MiB/s-85.8MiB/s (89.9MB/s-89.9MB/s), io=172MiB (180MB), run=2001-2001msec 00:27:01.213 WRITE: bw=85.2MiB/s (89.3MB/s), 85.2MiB/s-85.2MiB/s (89.3MB/s-89.3MB/s), io=170MiB (179MB), run=2001-2001msec 00:27:01.213 ----------------------------------------------------- 00:27:01.213 Suppressions used: 00:27:01.213 count bytes template 00:27:01.213 1 32 /usr/src/fio/parse.c 00:27:01.213 1 8 libtcmalloc_minimal.so 00:27:01.213 ----------------------------------------------------- 00:27:01.213 00:27:01.213 13:47:07 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:27:01.213 13:47:07 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:27:01.213 00:27:01.213 real 0m32.603s 00:27:01.213 user 0m17.224s 00:27:01.213 sys 0m29.041s 00:27:01.213 ************************************ 00:27:01.213 END TEST nvme_fio 00:27:01.213 ************************************ 00:27:01.213 13:47:07 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:01.213 13:47:07 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:27:01.213 ************************************ 00:27:01.213 END TEST nvme 00:27:01.213 ************************************ 00:27:01.213 00:27:01.213 real 1m47.949s 00:27:01.213 user 3m51.549s 00:27:01.213 sys 0m43.174s 00:27:01.213 13:47:07 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:01.213 13:47:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:27:01.213 13:47:08 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:27:01.213 13:47:08 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:27:01.213 13:47:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:01.213 13:47:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:01.213 13:47:08 -- common/autotest_common.sh@10 -- # set +x 00:27:01.213 ************************************ 00:27:01.213 START TEST nvme_scc 00:27:01.213 ************************************ 00:27:01.213 13:47:08 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:27:01.213 * Looking for test storage... 
00:27:01.213 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:01.213 13:47:08 nvme_scc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:01.213 13:47:08 nvme_scc -- common/autotest_common.sh@1693 -- # lcov --version 00:27:01.213 13:47:08 nvme_scc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:01.213 13:47:08 nvme_scc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@345 -- # : 1 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@368 -- # return 0 00:27:01.213 13:47:08 nvme_scc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:01.213 13:47:08 nvme_scc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:01.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.213 --rc genhtml_branch_coverage=1 00:27:01.213 --rc genhtml_function_coverage=1 00:27:01.213 --rc genhtml_legend=1 00:27:01.213 --rc geninfo_all_blocks=1 00:27:01.213 --rc geninfo_unexecuted_blocks=1 00:27:01.213 00:27:01.213 ' 00:27:01.213 13:47:08 nvme_scc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:01.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.213 --rc genhtml_branch_coverage=1 00:27:01.213 --rc genhtml_function_coverage=1 00:27:01.213 --rc genhtml_legend=1 00:27:01.213 --rc geninfo_all_blocks=1 00:27:01.213 --rc geninfo_unexecuted_blocks=1 00:27:01.213 00:27:01.213 ' 00:27:01.213 13:47:08 nvme_scc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:27:01.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.213 --rc genhtml_branch_coverage=1 00:27:01.213 --rc genhtml_function_coverage=1 00:27:01.213 --rc genhtml_legend=1 00:27:01.213 --rc geninfo_all_blocks=1 00:27:01.213 --rc geninfo_unexecuted_blocks=1 00:27:01.213 00:27:01.213 ' 00:27:01.213 13:47:08 nvme_scc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:01.213 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.213 --rc genhtml_branch_coverage=1 00:27:01.213 --rc genhtml_function_coverage=1 00:27:01.213 --rc genhtml_legend=1 00:27:01.213 --rc geninfo_all_blocks=1 00:27:01.213 --rc geninfo_unexecuted_blocks=1 00:27:01.213 00:27:01.213 ' 00:27:01.213 13:47:08 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:27:01.213 13:47:08 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:27:01.213 13:47:08 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:27:01.213 13:47:08 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:01.213 13:47:08 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.213 13:47:08 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.213 13:47:08 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.213 13:47:08 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.213 13:47:08 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.213 13:47:08 nvme_scc -- paths/export.sh@5 -- # export PATH 00:27:01.213 13:47:08 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
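In the scan below, setup.sh reset first rebinds the devices to the kernel nvme driver, then scan_nvme_ctrls walks /sys/class/nvme/nvme*, resolves each controller's PCI address, and snapshots every id-ctrl field into a bash associative array named after the controller; the trace that follows unrolls that loop one register at a time. A condensed sketch of the parsing it performs, assuming nvme-cli's "field : value" id-ctrl output as shown below:

    declare -A nvme0
    while IFS=: read -r reg val; do
        # Keep "name : value" pairs, e.g. nvme0[vid]=0x1b36; trailing padding
        # in values such as sn/mn is preserved, matching the trace.
        [[ -n $val ]] && nvme0[${reg//[[:space:]]/}]=${val# }
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)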
00:27:01.213 13:47:08 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:27:01.213 13:47:08 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:27:01.213 13:47:08 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:27:01.213 13:47:08 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:27:01.213 13:47:08 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:27:01.213 13:47:08 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:27:01.213 13:47:08 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:27:01.213 13:47:08 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:27:01.213 13:47:08 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:27:01.213 13:47:08 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:01.213 13:47:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:27:01.213 13:47:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:27:01.214 13:47:08 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:27:01.214 13:47:08 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:01.214 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:01.472 Waiting for block devices as requested 00:27:01.730 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:01.730 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:01.730 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:27:01.989 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:27:07.300 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:27:07.300 13:47:14 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:27:07.300 13:47:14 nvme_scc -- scripts/common.sh@18 -- # local i 00:27:07.300 13:47:14 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:27:07.300 13:47:14 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:07.300 13:47:14 nvme_scc -- scripts/common.sh@27 -- # return 0 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.300 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:27:07.301 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:27:07.302 13:47:14 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:27:07.302 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:27:07.303 13:47:14 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # 
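
The trace above is nvme/functions.sh's nvme_get walking the output of `nvme id-ctrl /dev/nvme0` row by row: IFS=: splits each "field : value" line, rows with an empty value are skipped (functions.sh@22), and eval stores the pair in a global associative array (functions.sh@23, giving nvme0[ver]=0x10400 and so on). A minimal sketch of that parsing pattern follows; the whitespace trimming and helper body are reconstructed for illustration, not copied verbatim from nvme/functions.sh:

    # Sketch of the id-ctrl/id-ns parse loop traced above; trimming
    # details are illustrative, not a verbatim copy of nvme/functions.sh.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # declares a global array, e.g. nvme0=()
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue       # keep only "field : value" rows
            reg=${reg//[[:space:]]/}        # squeeze whitespace: "ps    0" -> "ps0"
            val=${val# }                    # drop the single space after ':'
            eval "${ref}[$reg]=\"\$val\""   # e.g. nvme0[ver]="0x10400"
        done < <(nvme "$@")
    }

    nvme_get nvme0 id-ctrl /dev/nvme0
    echo "${nvme0[ver]}"                    # 0x10400, i.e. NVMe 1.4

Squeezing whitespace out of the key is what turns nvme-cli's two power-state rows into the separate ps0 and rwt entries recorded just above.
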
read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.303 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:27:07.304 
13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:27:07.304 13:47:14 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:07.304 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:27:07.305 13:47:14 nvme_scc 
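
From functions.sh@53-58 in the trace: each controller's namespaces are collected through a nameref into nvme0_ns, and the extglob pattern matches both the generic character node (ng0n1) and the block namespace (nvme0n1), each of which gets its own id-ns parse. Roughly, reusing the nvme_get sketch above and assuming extglob/nullglob are enabled by the surrounding script:

    # Sketch of the namespace walk at functions.sh@53-58; shell-option
    # handling here is an assumption about the surrounding script state.
    shopt -s extglob nullglob

    ctrl=/sys/class/nvme/nvme0
    declare -A nvme0_ns
    declare -n _ctrl_ns=nvme0_ns            # nameref, as in functions.sh@53

    # @("ng0"|"nvme0n")* expands to ng0n1 (char node) and nvme0n1 (block dev)
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        ns_dev=${ns##*/}                    # ng0n1 or nvme0n1
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev         # index by namespace id: "1"
    done

Both nodes describe the same namespace, which is why the ng0n1 and nvme0n1 dumps here carry identical nsze/ncap/nuse values (0x140000).
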
-- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.305 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:27:07.306 13:47:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:27:07.306 13:47:14 nvme_scc -- scripts/common.sh@18 -- # local i 00:27:07.306 13:47:14 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:27:07.306 13:47:14 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:07.306 13:47:14 nvme_scc -- scripts/common.sh@27 -- # return 0 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:27:07.306 13:47:14 
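
At this point nvme0 is fully registered (the ctrls/nvmes/bdfs maps plus an ordered_ctrls slot keyed by the bare index, functions.sh@60-63), and the loop moves on to nvme1 at 0000:00:10.0, which first has to pass pci_can_use. The trace only shows the empty-list fast path (common.sh@21-27); below is a hedged sketch of the gate's apparent allow/block semantics, with the PCI_BLOCKED and PCI_ALLOWED names assumed from SPDK convention rather than read out of scripts/common.sh:

    # Hedged reconstruction of the pci_can_use gate (scripts/common.sh@18-27).
    # The PCI_BLOCKED/PCI_ALLOWED variable names are assumptions.
    pci_can_use() {
        local i
        for i in $PCI_BLOCKED; do           # an explicit block always wins
            [[ $i == "$1" ]] && return 1
        done
        [[ -z $PCI_ALLOWED ]] && return 0   # no allow-list: accept anything
        for i in $PCI_ALLOWED; do           # otherwise the BDF must be listed
            [[ $i == "$1" ]] && return 0
        done
        return 1
    }

    pci_can_use 0000:00:10.0 && echo "claiming nvme1"

With both lists empty, as in this run ([[ -z '' ]] at common.sh@25 returning 0), every probed controller is accepted and nvme1 is claimed next.
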
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:27:07.306 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.307 
13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:27:07.307 
13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.307 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:27:07.308 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:07.309 13:47:14 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:27:07.309 13:47:14 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:27:07.309 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:27:07.310 13:47:14 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:27:07.310 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 
13:47:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:27:07.311 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:27:07.312 
13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:07.312 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:27:07.313 13:47:14 
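The wall of eval lines above (and below) is produced by a single small loop: nvme_get pipes "nvme id-ns" or "nvme id-ctrl" output through "while IFS=: read -r reg val" and evals every "reg : val" pair into a global associative array named after the device -- here nvme1n1, whose final entries are the eight lbafN descriptors. A minimal standalone sketch of that parse pattern (a simplification of nvme/functions.sh, not the verbatim function; assumes nvme-cli is installed and /dev/nvme1n1 exists):

    declare -A nvme1n1=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue              # keep only "reg : val" lines
        reg=${reg//[[:space:]]/}               # strip padding around the register name
        eval "nvme1n1[${reg,,}]=\"${val# }\""  # e.g. nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
    done < <(nvme id-ns /dev/nvme1n1)
    echo "flbas=${nvme1n1[flbas]}"

Because read is given exactly two variables, only the first ':' splits each line, which is why the lbafN values -- which themselves contain colons (ms:, lbads:, rp:) -- survive intact in the entries above.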
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:27:07.313 13:47:14 nvme_scc -- scripts/common.sh@18 -- # local i 00:27:07.313 13:47:14 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:27:07.313 13:47:14 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:07.313 13:47:14 nvme_scc -- scripts/common.sh@27 -- # return 0 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 
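At the top of this block the finished controller is committed to the global maps (ctrls, nvmes, bdfs, ordered_ctrls) and the sysfs scan advances to /sys/class/nvme/nvme2 at PCI address 0000:00:12.0. The bare [[ =~ 0000:00:12.0 ]] is pci_can_use testing that address against an empty PCI_BLOCKED list, so the QEMU controller (vid 0x1b36, ssvid 0x1af4, serial 12342) is accepted and identified. A hedged sketch of that scan-and-register shape (PCI_BLOCKED matches the variable scripts/common.sh consults; the readlink-based BDF lookup is an assumption for illustration):

    shopt -s nullglob
    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()
    for ctrl in /sys/class/nvme/nvme*; do
        pci=$(basename "$(readlink -f "$ctrl/device")")            # e.g. 0000:00:12.0
        [[ -n $PCI_BLOCKED && $PCI_BLOCKED =~ $pci ]] && continue  # pci_can_use says no
        ctrl_dev=${ctrl##*/}                                       # e.g. nvme2
        ctrls[$ctrl_dev]=$ctrl_dev
        nvmes[$ctrl_dev]=${ctrl_dev}_ns       # name of the array holding its namespaces
        bdfs[$ctrl_dev]=$pci
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev                 # indexed by ctrl number
    done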
'nvme2[fr]="8.0.0 "' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.313 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:27:07.314 13:47:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.314 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:27:07.314 13:47:14 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:27:07.315 
13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:27:07.315 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:07.316 
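Two things are worth noting at this boundary. First, the nvme2[rwt] entry above is not a real identify field: "rwt:0 rwl:0 idle_power:- active_power:-" is the wrapped continuation of the ps0 power-state line in nvme-cli's output, so the split-on-first-colon parser files it under a pseudo-register of its own. Second, with the controller array complete, the loop that follows walks the controller's namespaces using an extglob pattern that matches both the generic character nodes (ng2n1, ng2n2) and the block nodes (nvme2n1, ...). A standalone sketch of that glob (assumes extglob and nullglob are enabled, as the sourced functions arrange):

    shopt -s extglob nullglob
    ctrl=/sys/class/nvme/nvme2
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2",
        # so the pattern expands to @(ng2|nvme2n)* and matches ng2n1, nvme2n1, ...
        echo "namespace node: ${ns##*/}"
    done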
13:47:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- 
00:27:07.316 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1 (cont.): nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:27:07.317 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:27:07.317 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:27:07.317 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:27:07.317 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:27:07.317 13:47:14 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
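Every register above goes through the same traced cycle: test the value at @22, eval it into an associative array named after the device at @23, then reset IFS and read the next reg/val pair at @21. Reconstructed from those records, the nvme_get helper boils down to roughly the following sketch (a reconstruction, not the verbatim nvme/functions.sh source; the whitespace trimming in particular is approximate):

    # Sketch of the nvme_get parse loop traced at nvme/functions.sh@16-23.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                 # e.g. declare -gA ng2n1=()
        while IFS=: read -r reg val; do     # split "nsze : 0x100000" on the first ':'
            reg=${reg//[[:space:]]/}        # "lbaf  4 " -> "lbaf4"
            val=${val# }                    # trim the space after the colon
            [[ -n $val ]] && eval "${ref}[$reg]=\"$val\""   # ng2n1[nsze]="0x100000"
        done < <(nvme "$@")                 # the harness runs its own build,
    }                                       # /usr/local/src/nvme-cli/nvme

Called as nvme_get ng2n1 id-ns /dev/ng2n1, which matches the @57/@16 records in this trace; the [[ -n $val ]] guard is why header lines with nothing after the colon show up as the skipped [[ -n '' ]] checks.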
00:27:07.317 13:47:14 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:27:07.317 13:47:14 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:27:07.317 13:47:14 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:27:07.317 13:47:14 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:27:07.317 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:27:07.318 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2: mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0
00:27:07.318 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2: nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:27:07.318 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:27:07.318 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:27:07.319 13:47:14 nvme_scc -- nvme/functions.sh@23 -- # ng2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:27:07.319 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n2: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:27:07.319 13:47:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
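Between namespaces, the @54-@58 records show the discovery side: an extglob over the controller's sysfs directory that picks up both the generic char nodes (ng2n1..ng2n3) and the block nodes (nvme2n1..). A minimal standalone rendering of that loop, assuming extglob is enabled and nvme_get is defined as sketched above:

    # Sketch of the namespace-discovery loop traced at nvme/functions.sh@54-58.
    shopt -s extglob
    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns=()
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do   # ng2* or nvme2n*
        [[ -e $ns ]] || continue          # the glob may expand to a literal non-match
        ns_dev=${ns##*/}                  # ng2n1, ng2n2, ..., nvme2n1, ...
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        _ctrl_ns[${ns##*n}]=$ns_dev       # keyed by the trailing namespace index
    done

Because the key strips everything through the last 'n', ng2n1 and nvme2n1 both land on _ctrl_ns[1]; the later nvme2nX passes visibly overwrite the ng2nX entries further down in this trace.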
00:27:07.319 13:47:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:27:07.319 13:47:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:27:07.319 13:47:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:27:07.319 13:47:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:27:07.585 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:27:07.586 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0
00:27:07.586 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:27:07.586 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:27:07.586 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:27:07.587 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:27:07.587 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:27:07.587 13:47:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
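All of the namespaces identified so far report nlbaf=7 (eight formats), flbas=0x4, and lbaf4='ms:0 lbads:12 rp:0 (in use)'. The low nibble of flbas selects the LBA format, and lbads is a power-of-two shift, so the in-use format is plain 4096-byte blocks with no metadata. As a quick check in bash arithmetic:

    # Decode the flbas/lbaf values captured above.
    flbas=0x4
    fmt=$(( flbas & 0xf ))    # bits 3:0 select the LBA format -> 4
    lbads=12                  # from lbaf4: 'ms:0 lbads:12 rp:0 (in use)'
    echo "format $fmt: $(( 1 << lbads ))-byte blocks, ms:0 metadata"

This prints "format 4: 4096-byte blocks, ms:0 metadata".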
00:27:07.587 13:47:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:27:07.587 13:47:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:27:07.587 13:47:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:27:07.587 13:47:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:27:07.587 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4
00:27:07.587 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0
00:27:07.588 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0
00:27:07.588 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:27:07.588 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:27:07.588 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:27:07.588 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:27:07.588 13:47:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
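At this point the block node nvme2n1 has been parsed with the same values as its char-device twin ng2n1, and _ctrl_ns[1] has been reassigned from ng2n1 to nvme2n1. With nsze=0x100000 and the 4096-byte format decoded above, each namespace works out to 4 GiB, which later helpers can compute straight from the populated arrays. A small illustration (the size_gib helper is hypothetical, for this note only):

    size_gib() {                               # hypothetical helper, not in functions.sh
        local -n ns=$1                         # nameref onto e.g. nvme2n1
        echo $(( ns[nsze] * 4096 / 1024**3 ))  # 0x100000 blocks * 4096 B = 4 GiB
    }
    size_gib nvme2n1                           # -> 4

Bash arithmetic accepts the 0x prefix, so the hex strings stored by nvme_get can be used directly.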
]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:27:07.589 13:47:15 nvme_scc -- 
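The raw values allow an easy capacity cross-check: nsze is a count of logical blocks, and the in-use LBA format for this namespace (lbaf4, lbads:12, selected by the flbas=0x4 just recorded) means 4096-byte blocks. A quick arithmetic sketch using the captured numbers:

```bash
# nsze (0x100000 blocks) times the in-use block size (2^lbads, lbads=12)
nsze=0x100000 lbads=12
echo "$(( nsze * (1 << lbads) )) bytes"        # 4294967296
echo "$(( (nsze * (1 << lbads)) >> 30 )) GiB"  # 4
```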
nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:27:07.589 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 
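The mssrl, mcl and msrc fields captured just above bound the NVMe Copy command: MSSRL is the longest single source range in blocks, MCL the longest total copy, and MSRC the zero-based count of source ranges. A small sketch of turning them into usable limits (field semantics per the NVMe base spec; variable names mirror the trace):

```bash
mssrl=128 mcl=128 msrc=127
max_ranges=$(( msrc + 1 ))   # MSRC is zero-based, so 128 ranges
echo "copy: <=${mssrl} blocks/range, <=${mcl} blocks total, <=${max_ranges} ranges"
```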
'nvme2n2[nulbaf]="0"' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:27:07.590 
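flbas selects which of the lbaf entries is active: its low four bits index the format list, which is why flbas=0x4 lines up with the lbaf4 entry marked "(in use)". A sketch of recovering the active block size from these strings (the lbaf value is pasted from the trace; decoding of the ms and rp fields is omitted):

```bash
flbas=0x4
fmt=$(( flbas & 0xf ))                      # -> 4
lbaf='ms:0 lbads:12 rp:0 (in use)'          # nvme2n2[lbaf4] above
lbads=$(sed -n 's/.*lbads:\([0-9]*\).*/\1/p' <<<"$lbaf")
echo "lbaf${fmt}: $(( 1 << lbads ))-byte blocks"   # 4096
```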
13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:27:07.590 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:27:07.591 13:47:15 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:27:07.591 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:07.592 13:47:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:27:07.592 13:47:15 nvme_scc -- 
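With all three namespaces of this controller parsed, functions.sh registers the controller in its global maps: ctrls and nvmes here, with the PCI address (bdfs) and scan order (ordered_ctrls) assigned immediately below. A hypothetical consumer could walk those maps like this (the map names follow the trace; the contents shown are just the entry recorded for nvme2):

```bash
declare -A ctrls=( [nvme2]=nvme2 )
declare -A bdfs=( [nvme2]=0000:00:12.0 )
for dev in "${!ctrls[@]}"; do
    printf '%s -> PCI %s\n' "$dev" "${bdfs[$dev]}"
done
```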
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:27:07.592 13:47:15 nvme_scc -- scripts/common.sh@18 -- # local i 00:27:07.592 13:47:15 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:27:07.592 13:47:15 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:07.592 13:47:15 nvme_scc -- scripts/common.sh@27 -- # return 0 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@18 -- # shift 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 
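The identify fields just captured pin down the emulated hardware: PCI vendor 0x1b36 is the ID Red Hat uses for QEMU-emulated devices, subsystem vendor 0x1af4 is Red Hat's virtio ID (formerly Qumranet), and the model string is "QEMU NVMe Ctrl". A toy lookup sketch (the two-entry vendor table is my own stub, not part of the test suite):

```bash
declare -A pci_vendors=( [0x1b36]='Red Hat (QEMU)' [0x1af4]='Red Hat (virtio)' )
vid=0x1b36 ssvid=0x1af4
echo "vid: ${pci_vendors[$vid]}, ssvid: ${pci_vendors[$ssvid]}"
```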
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.592 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:27:07.593 13:47:15 nvme_scc -- 
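Two of the fields above decode further. ver packs the spec revision as major.minor.tertiary bytes, so 0x10400 is NVMe 1.4.0, and mdts caps a data transfer at 2^mdts minimum-size pages; the 4 KiB page size below is an assumption, since MPSMIN actually comes from the CAP register rather than id-ctrl:

```bash
ver=0x10400 mdts=7 mpsmin_bytes=4096   # page size assumed, not from id-ctrl
printf 'NVMe %d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))
echo "max transfer: $(( (1 << mdts) * mpsmin_bytes / 1024 )) KiB"   # 512
```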
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:27:07.593 13:47:15 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 
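oacs=0x12a is a bitmask of optional admin commands; unpacking it shows the usual QEMU set. The bit names below follow the NVMe base spec ordering and are my annotation, not part of the trace:

```bash
oacs=0x12a
names=( security format-nvm firmware ns-mgmt self-test
        directives nvme-mi virt-mgmt dbbuf-config )
for i in "${!names[@]}"; do
    (( oacs & (1 << i) )) && echo "bit $i: ${names[$i]}"
done
# -> format-nvm, ns-mgmt, directives, dbbuf-config
```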
13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:27:07.593 13:47:15 nvme_scc -- 
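wctemp and cctemp are reported in Kelvin, so the values just captured are the conventional 70 C warning and 100 C critical thresholds:

```bash
wctemp=343 cctemp=373
echo "warning $(( wctemp - 273 )) C, critical $(( cctemp - 273 )) C"
```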
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.593 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 
13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:27:07.594 
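sqes and cqes each pack two 4-bit log2 sizes, required (low nibble) and maximum (high nibble); the 0x66 and 0x44 recorded here therefore say this controller speaks only the standard 64-byte submission and 16-byte completion entries:

```bash
sqes=0x66 cqes=0x44
echo "SQE $(( 1 << (sqes & 0xf) ))..$(( 1 << (sqes >> 4) )) bytes"   # 64..64
echo "CQE $(( 1 << (cqes & 0xf) ))..$(( 1 << (cqes >> 4) )) bytes"   # 16..16
```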
13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.594 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.595 13:47:15 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:27:07.595 13:47:15 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:27:07.595 13:47:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs
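Every ctrl_has_scc probe in the scan above and below reduces to one arithmetic test: ONCS (Optional NVM Command Support) bit 8 is the Copy command flag, and the reported 0x15d has it set (0x15d & 0x100 = 0x100), so all four QEMU controllers qualify. The same check outside the framework, as a sketch (the helper name is illustrative, not functions.sh's):

    # Succeed iff the controller advertises the NVMe Copy (simple copy) command.
    has_simple_copy() {
        local oncs
        oncs=$(/usr/local/src/nvme-cli/nvme id-ctrl "$1" |
               awk -F: '/^oncs/ {gsub(/[[:space:]]/, "", $2); print $2}')
        (( oncs & 1 << 8 ))   # ONCS bit 8: Copy supported
    }
    has_simple_copy /dev/nvme1 && echo "nvme1 supports SCC"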
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:27:07.595 13:47:15 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:27:07.596 13:47:15 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:27:07.596 13:47:15 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:27:07.596 13:47:15 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:27:07.596 13:47:15 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:27:07.596 13:47:15 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:27:07.596 13:47:15 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:27:07.596 13:47:15 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:27:07.596 13:47:15 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:27:07.596 13:47:15 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:27:07.596 13:47:15 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:27:07.596 13:47:15 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:27:07.596 13:47:15 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:27:07.596 13:47:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:27:07.596 13:47:15 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:27:07.596 13:47:15 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:27:08.164 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:09.111 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:27:09.111 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:27:09.111 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:27:09.111 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:27:09.111 13:47:16 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:27:09.111 13:47:16 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:27:09.111 13:47:16 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:09.111 13:47:16 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:27:09.111 ************************************
00:27:09.111 START TEST nvme_simple_copy
00:27:09.111 ************************************
00:27:09.111 13:47:16 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:27:09.370 Initializing NVMe Controllers
00:27:09.370 Attaching to 0000:00:10.0
00:27:09.370 Controller supports SCC. Attached to 0000:00:10.0
00:27:09.370 Namespace ID: 1 size: 6GB
00:27:09.370 Initialization complete.
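The simple_copy app attaching above exercises that command end to end, as its output below shows: write LBAs 0 through 63 with random data, issue one Copy with destination LBA 256, read both ranges back, and count matching blocks. Roughly the same experiment can be driven with nvme-cli, assuming the 4096-byte block size reported below; copy flag spellings differ between nvme-cli releases, so treat this as a sketch and confirm with 'nvme copy --help':

    dev=/dev/nvme1n1    # illustrative namespace node for the 0000:00:10.0 controller
    dd if=/dev/urandom of="$dev" bs=4096 count=64 oflag=direct    # fill LBAs 0-63
    nvme copy "$dev" --slbs=0 --blocks=63 --sdlba=256             # NLB is 0-based in the spec
    dd if="$dev" bs=4096 count=64 iflag=direct of=/tmp/src.bin
    dd if="$dev" bs=4096 skip=256 count=64 iflag=direct of=/tmp/dst.bin
    cmp /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"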
00:27:09.370
00:27:09.370 Controller QEMU NVMe Ctrl (12340 )
00:27:09.370 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:27:09.370 Namespace Block Size:4096
00:27:09.370 Writing LBAs 0 to 63 with Random Data
00:27:09.370 Copied LBAs from 0 - 63 to the Destination LBA 256
00:27:09.370 LBAs matching Written Data: 64
00:27:09.370
00:27:09.370 real 0m0.400s
00:27:09.370 user 0m0.179s
00:27:09.370 sys 0m0.120s
00:27:09.370 13:47:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:09.370 13:47:17 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:27:09.370 ************************************
00:27:09.370 END TEST nvme_simple_copy
00:27:09.370 ************************************
00:27:09.628
00:27:09.628 real 0m9.073s
00:27:09.628 user 0m1.690s
00:27:09.628 sys 0m2.415s
00:27:09.628 13:47:17 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:09.628 13:47:17 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:27:09.628 ************************************
00:27:09.628 END TEST nvme_scc
00:27:09.628 ************************************
00:27:09.628 13:47:17 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:27:09.628 13:47:17 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:27:09.628 13:47:17 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:27:09.628 13:47:17 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:27:09.628 13:47:17 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:27:09.628 13:47:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:27:09.628 13:47:17 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:09.628 13:47:17 -- common/autotest_common.sh@10 -- # set +x
00:27:09.628 ************************************
00:27:09.628 START TEST nvme_fdp
00:27:09.628 ************************************
00:27:09.628 13:47:17 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:27:09.628 * Looking for test storage...
00:27:09.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:27:09.629 13:47:17 nvme_fdp -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:27:09.629 13:47:17 nvme_fdp -- common/autotest_common.sh@1693 -- # lcov --version
00:27:09.629 13:47:17 nvme_fdp -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:27:09.886 13:47:17 nvme_fdp -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:27:09.886 13:47:17 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:09.886 13:47:17 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:09.886 13:47:17 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:09.886 13:47:17 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:27:09.886 13:47:17 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:27:09.886 13:47:17 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:27:09.886 13:47:17 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:27:09.886 13:47:17 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:27:09.886 13:47:17 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:27:09.886 13:47:17 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:27:09.886 13:47:17 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:09.886 13:47:17 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:27:09.886 13:47:17 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:27:09.886 13:47:17 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:09.886 13:47:17 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:09.886 13:47:17 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:27:09.887 13:47:17 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:27:09.887 13:47:17 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:09.887 13:47:17 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:27:09.887 13:47:17 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:27:09.887 13:47:17 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:27:09.887 13:47:17 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:27:09.887 13:47:17 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:09.887 13:47:17 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:27:09.887 13:47:17 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:27:09.887 13:47:17 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:09.887 13:47:17 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:09.887 13:47:17 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:27:09.887 13:47:17 nvme_fdp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:09.887 13:47:17 nvme_fdp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:09.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.887 --rc genhtml_branch_coverage=1 00:27:09.887 --rc genhtml_function_coverage=1 00:27:09.887 --rc genhtml_legend=1 00:27:09.887 --rc geninfo_all_blocks=1 00:27:09.887 --rc geninfo_unexecuted_blocks=1 00:27:09.887 00:27:09.887 ' 00:27:09.887 13:47:17 nvme_fdp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:09.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.887 --rc genhtml_branch_coverage=1 00:27:09.887 --rc genhtml_function_coverage=1 00:27:09.887 --rc genhtml_legend=1 00:27:09.887 --rc geninfo_all_blocks=1 00:27:09.887 --rc geninfo_unexecuted_blocks=1 00:27:09.887 00:27:09.887 ' 00:27:09.887 13:47:17 nvme_fdp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:09.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.887 --rc genhtml_branch_coverage=1 00:27:09.887 --rc genhtml_function_coverage=1 00:27:09.887 --rc genhtml_legend=1 00:27:09.887 --rc geninfo_all_blocks=1 00:27:09.887 --rc geninfo_unexecuted_blocks=1 00:27:09.887 00:27:09.887 ' 00:27:09.887 13:47:17 nvme_fdp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:09.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:09.887 --rc genhtml_branch_coverage=1 00:27:09.887 --rc genhtml_function_coverage=1 00:27:09.887 --rc genhtml_legend=1 00:27:09.887 --rc geninfo_all_blocks=1 00:27:09.887 --rc geninfo_unexecuted_blocks=1 00:27:09.887 00:27:09.887 ' 00:27:09.887 13:47:17 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:27:09.887 13:47:17 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:27:09.887 13:47:17 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:27:09.887 13:47:17 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:09.887 13:47:17 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:09.887 13:47:17 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:27:09.887 13:47:17 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:09.887 13:47:17 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:09.887 13:47:17 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:09.887 13:47:17 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.887 13:47:17 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.887 13:47:17 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.887 13:47:17 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:27:09.887 13:47:17 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:09.887 13:47:17 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:27:09.887 13:47:17 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:27:09.887 13:47:17 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:27:09.887 13:47:17 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:27:09.887 13:47:17 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:27:09.887 13:47:17 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:27:09.887 13:47:17 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:27:09.887 13:47:17 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:27:09.887 13:47:17 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:27:09.887 13:47:17 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:09.887 13:47:17 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:10.453 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:10.453 Waiting for block devices as requested 00:27:10.711 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:10.711 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:10.711 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:27:10.969 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:27:16.270 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:27:16.270 13:47:23 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:27:16.270 13:47:23 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:27:16.270 13:47:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:27:16.270 13:47:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:27:16.270 13:47:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:27:16.270 13:47:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:27:16.270 13:47:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:27:16.270 13:47:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:27:16.270 13:47:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:16.270 13:47:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:27:16.270 13:47:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:27:16.270 13:47:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:27:16.270 13:47:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:27:16.270 13:47:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:27:16.271 13:47:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:27:16.271 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:27:16.272 13:47:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.272 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:27:16.273 13:47:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 
13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:27:16.273 13:47:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:27:16.273 13:47:23 
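
With the controller fields captured, functions.sh@54 walks the controller's sysfs directory for namespace nodes. The extglob pattern matches both the character-device name (ng0n1) and the block-device name (nvme0n1), so each namespace is dumped twice, once per node type. A sketch of that walk, with paths as in the trace and only the echo being ours:

  # Sketch of the @54 sysfs walk traced above; requires extglob, as the
  # test script itself does.
  shopt -s extglob
  ctrl=/sys/class/nvme/nvme0
  for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    [[ -e $ns ]] || continue           # an unmatched glob stays literal
    echo "namespace node: ${ns##*/}"   # ng0n1, then nvme0n1
  done
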
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.273 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:27:16.274 13:47:23 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.274 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:27:16.275 13:47:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
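
The id-ns capture ends with the eight LBA formats (lbaf0 through lbaf7). flbas=0x4 selects lbaf4, whose lbads:12 means 4096-byte data blocks; combined with nsze=0x140000 that puts the namespace at 5 GiB. A small arithmetic sketch reusing the values captured above (field layout per the NVMe base spec: FLBAS bits 3:0 index the LBA format, LBADS is log2 of the data size; the variable names are ours):

  # Sketch: derive block size and capacity from the ng0n1 fields above.
  declare -A ng0n1=(
    [nsze]=0x140000
    [flbas]=0x4
    [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
  )
  fmt=$((ng0n1[flbas] & 0xf))              # -> 4
  lbads=${ng0n1[lbaf$fmt]#*lbads:}
  lbads=${lbads%% *}                       # -> 12
  echo "block size: $((1 << lbads)) bytes"                  # 4096
  echo "capacity:   $((ng0n1[nsze] * (1 << lbads))) bytes"  # 5 GiB
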
00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:27:16.275 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:27:16.276 13:47:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:16.276 13:47:23 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:16.276 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:27:16.277 13:47:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:27:16.277 13:47:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:27:16.277 13:47:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:16.277 13:47:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:27:16.277 13:47:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:27:16.277 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.278 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.279 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:27:16.280 13:47:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.280 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:27:16.281 13:47:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:16.281 13:47:23 nvme_fdp -- 
nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:27:16.281 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:16.282 13:47:23 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:27:16.282 13:47:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:27:16.282 13:47:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:27:16.282 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
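The repeated IFS=: / read -r reg val / eval records above all come from one small loop in nvme/functions.sh's nvme_get. A minimal sketch of that parsing pattern follows; it is an assumed simplification that uses a plain associative array in place of the script's name-referenced one, with the nvme-cli path and device node taken from the trace:

    declare -A ns                      # stand-in for the ng1n1/nvme1n1 arrays above
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}       # "lbaf  3 " -> "lbaf3", matching the keys in the trace
        [[ -n $val ]] || continue      # mirrors the [[ -n ... ]] guard logged before each eval
        ns[$reg]=${val# }              # drop the single space after the colon
    done < <(/usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1)
    printf 'nsze=%s flbas=%s\n' "${ns[nsze]}" "${ns[flbas]}"

Splitting on the first ':' only (IFS=: with two read variables) is what lets multi-colon values such as the lbafN descriptors, e.g. "ms:64 lbads:12 rp:0", land intact in val.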
00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:27:16.283 13:47:23 nvme_fdp -- scripts/common.sh@18 -- # local i 00:27:16.283 13:47:23 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:27:16.283 13:47:23 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:16.283 13:47:23 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # 
[[ -n '' ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.283 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
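The values captured this way are raw identify fields; decoding is left to the caller. A small sketch of what a few of them mean, assuming the standard NVMe field layouts (ver packs major/minor/tertiary as MJR<<16 | MNR<<8 | TER; sqes/cqes pack log2 entry sizes as maximum<<4 | required):

    ver=0x10400           # nvme2[ver] above -> NVMe 1.4.0
    sqes=0x66 cqes=0x44   # from the nvme1 dump earlier in the log
    printf 'NVMe %d.%d.%d\n' $((ver >> 16)) $(((ver >> 8) & 0xff)) $((ver & 0xff))
    printf 'SQ entry %dB, CQ entry %dB (required sizes)\n' \
        $((1 << (sqes & 0xf))) $((1 << (cqes & 0xf)))

For this QEMU controller that prints "NVMe 1.4.0" with 64-byte submission-queue and 16-byte completion-queue entries.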
00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:27:16.284 13:47:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:27:16.284 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:27:16.285 13:47:23 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # 
read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[fuses]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.285 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # 
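What the trace above records is nvme/functions.sh's nvme_get helper filling the bash associative array nvme2 with every field that `nvme id-ctrl /dev/nvme2` reports: each output line is split on the first `:` (IFS=:), non-empty values are assigned through eval, and xtrace echoes the guard, the eval, and the resulting assignment for every field. A minimal sketch of the same pattern, runnable on a host with nvme-cli and an NVMe device (the key/value trimming here is my simplification, not the exact functions.sh code):

#!/usr/bin/env bash
# Sketch of the nvme_get parsing pattern traced above (simplified).
# Assumes nvme-cli prints one "field      : value" line per field.
declare -A ctrl=()
while IFS=: read -r reg val; do
    [[ -n $reg && -n $val ]] || continue   # skip blanks, like functions.sh@22
    reg=${reg//[[:space:]]/}               # squash spaces: "ps    0" -> ps0
    val=${val#"${val%%[![:space:]]*}"}     # strip leading whitespace
    ctrl[$reg]=$val                        # functions.sh does this via eval
done < <(nvme id-ctrl /dev/nvme2)
echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]} subnqn=${ctrl[subnqn]}"

The first-colon split is also why multi-colon lines land oddly: the power-state continuation line surfaces above as its own nvme2[rwt] entry.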
00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000
00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000
00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000
00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14
00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7
00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4
00:27:16.286 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dps]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nmic]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[rescap]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[fpi]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabo]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[noiob]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwg]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npwa]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npdg]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[npda]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nows]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mcl]=128
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[msrc]=127
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[endgid]=0
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:27:16.287 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 '
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
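The loop at functions.sh@54 enumerates the controller's namespaces with a single extglob that matches both the character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, ...) under the controller's sysfs entry, and @58 registers each one in nvme2_ns keyed by namespace id. A sketch of that walk in isolation, assuming the usual /sys/class/nvme layout:

#!/usr/bin/env bash
# Sketch of the namespace walk at nvme/functions.sh@54-58.
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
declare -A nvme2_ns=()
# "ng${ctrl##*nvme}" -> "ng2" (char nodes); "${ctrl##*/}n" -> "nvme2n" (block nodes)
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    ns=${ns##*/}               # e.g. ng2n1
    nvme2_ns[${ns##*n}]=$ns    # key by namespace id; a later match overwrites
done
declare -p nvme2_ns            # e.g. nvme2_ns=([1]="ng2n1" [2]="ng2n2" [3]="ng2n3")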
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[flbas]=0x4
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dps]=0
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nmic]=0
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[fpi]=0
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawun]=0
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabo]=0
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[noiob]=0
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0
00:27:16.288 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwa]=0
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npdg]=0
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npda]=0
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nows]=0
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mssrl]=128
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[mcl]=128
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[msrc]=127
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nulbaf]=0
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[anagrpid]=0
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsattr]=0
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nvmsetid]=0
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[endgid]=0
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[eui64]=0000000000000000
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 '
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf1]='ms:8 lbads:9 rp:0 '
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf2]='ms:16 lbads:9 rp:0 '
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf3]='ms:64 lbads:9 rp:0 '
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)'
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf5]='ms:8 lbads:12 rp:0 '
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf6]='ms:16 lbads:12 rp:0 '
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3 00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsze]="0x100000"' 00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[ncap]="0x100000"' 00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[ncap]=0x100000 00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nuse]="0x100000"' 00:27:16.289 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nuse]=0x100000 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsfeat]="0x14"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsfeat]=0x14 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nlbaf]="7"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nlbaf]=7 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[flbas]="0x4"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[flbas]=0x4 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mc]="0x3"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mc]=0x3 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dpc]="0x1f"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dpc]=0x1f 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dps]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dps]=0 00:27:16.290 
13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nmic]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nmic]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[rescap]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[fpi]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[fpi]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[dlfeat]="1"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[dlfeat]=1 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawun]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawun]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nawupf]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nawupf]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nacwu]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nacwu]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabsn]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabsn]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabo]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabo]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
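Each block of assignments like the one above comes from the nvme_get helper in nvme/functions.sh: it runs the pinned nvme-cli binary (/usr/local/src/nvme-cli/nvme, functions.sh@16), splits every "field : value" line of the id-ns output on the colon (IFS=:, functions.sh@21), skips entries with no value (functions.sh@22) and evals each pair into a global associative array named after the device node (functions.sh@23). A simplified sketch of that loop, illustrative rather than the verbatim helper (the NVME_BIN fallback is an assumption):

    # Simplified sketch of the nvme_get loop this trace is executing.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                    # e.g. declares a global ng2n3=()
        while IFS=: read -r reg val; do        # split "nsze : 0x100000" style lines
            reg=${reg//[[:space:]]/}           # strip the padding around the field name
            [[ -n $val ]] || continue          # skip header/blank lines with no value
            eval "${ref}[${reg}]=\"${val# }\"" # e.g. ng2n3[nsze]="0x100000"
        done < <("${NVME_BIN:-nvme}" "$@")
    }
    nvme_get ng2n3 id-ns /dev/ng2n3            # shape of the call at functions.sh@57

Once an array is filled, functions.sh@58 indexes it by namespace id with _ctrl_ns[${ns##*n}]: the ${ns##*n} expansion strips the longest prefix ending in "n", so ng2n3 yields 3 and nvme2n1 yields 1.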
00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nabspf]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nabspf]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[noiob]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[noiob]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmcap]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmcap]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwg]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npwa]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwa]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npdg]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npdg]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[npda]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npda]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nows]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nows]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mssrl]="128"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mssrl]=128 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[mcl]="128"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[mcl]=128 00:27:16.290 13:47:23 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[msrc]="127"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[msrc]=127 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nulbaf]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nulbaf]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[anagrpid]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[anagrpid]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nsattr]="0"' 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsattr]=0 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.290 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nvmsetid]="0"' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nvmsetid]=0 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[endgid]="0"' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[endgid]=0 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[nguid]="00000000000000000000000000000000"' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[eui64]="0000000000000000"' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[eui64]=0000000000000000 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:16.291 13:47:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:27:16.291 13:47:23 nvme_fdp -- 
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:27:16.291 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:16.554 
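nvme2n1 reports nlbaf=7 (a 0's-based count, i.e. eight LBA formats) and flbas=0x4; bits 3:0 of flbas select the format in use, which is why lbaf4 carries the "(in use)" tag further down. A quick sketch of that decoding, using the values from the trace:

    # flbas bits 3:0 index the active LBA format (value taken from the trace above).
    flbas=0x4
    echo "active format: lbaf$(( flbas & 0xf ))"   # -> active format: lbaf4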
13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:27:16.554 13:47:23 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.554 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:16.555 
13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
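Each lbafN string recorded above packs three Identify Namespace fields: ms (bytes of metadata per LBA), lbads (log2 of the LBA data size) and rp (relative performance, 0 = best). The in-use lbaf4 therefore describes 4096-byte blocks with no separate metadata. A small sketch for pulling those fields apart; decode_lbaf is a hypothetical helper, not part of functions.sh:

    # Decode an "ms:M lbads:L rp:R" descriptor as captured in the arrays above.
    decode_lbaf() {
        local desc=$1 ms lbads rp
        ms=${desc#*ms:};       ms=${ms%% *}
        lbads=${desc#*lbads:}; lbads=${lbads%% *}
        rp=${desc#*rp:};       rp=${rp%% *}
        printf 'metadata=%dB block=%dB rp=%d\n' "$ms" "$(( 1 << lbads ))" "$rp"
    }
    decode_lbaf 'ms:0 lbads:12 rp:0 (in use)'   # -> metadata=0B block=4096B rp=0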
00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:23 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:27:16.555 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:16.555 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:16.555 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:27:16.555 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:27:16.555 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.555 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:16.555 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:27:16.555 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:27:16.555 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.555 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:27:16.556 13:47:24 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:27:16.556 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:27:16.557 13:47:24 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:27:16.557 13:47:24 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.557 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:27:16.558 13:47:24 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:27:16.558 13:47:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:27:16.558 13:47:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:27:16.559 13:47:24 nvme_fdp -- scripts/common.sh@18 -- # local i 00:27:16.559 13:47:24 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:27:16.559 13:47:24 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:16.559 13:47:24 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.559 13:47:24 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:27:16.559 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 
13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:27:16.560 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.561 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:27:16.562 13:47:24 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 ))
00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3
00:27:16.562 13:47:24 nvme_fdp -- nvme/functions.sh@209 -- # return 0
00:27:16.562 13:47:24 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:27:16.562 13:47:24 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
00:27:16.562 13:47:24 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:27:17.129 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:27:18.067 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:27:18.067 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:27:18.067 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:27:18.067 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:27:18.067 13:47:25 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:27:18.067 13:47:25 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:27:18.067 13:47:25 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:18.067 13:47:25 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:27:18.067 ************************************
00:27:18.067 START TEST nvme_flexible_data_placement
00:27:18.067 ************************************
00:27:18.067 13:47:25 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:27:18.326 Initializing NVMe Controllers
00:27:18.326 Attaching to 0000:00:13.0
00:27:18.326 Controller supports FDP Attached to 0000:00:13.0
00:27:18.326 Namespace ID: 1 Endurance Group ID: 1
00:27:18.326 Initialization complete.
00:27:18.326 00:27:18.326 ================================== 00:27:18.326 == FDP tests for Namespace: #01 == 00:27:18.326 ================================== 00:27:18.326 00:27:18.326 Get Feature: FDP: 00:27:18.326 ================= 00:27:18.326 Enabled: Yes 00:27:18.326 FDP configuration Index: 0 00:27:18.326 00:27:18.326 FDP configurations log page 00:27:18.326 =========================== 00:27:18.326 Number of FDP configurations: 1 00:27:18.326 Version: 0 00:27:18.326 Size: 112 00:27:18.326 FDP Configuration Descriptor: 0 00:27:18.326 Descriptor Size: 96 00:27:18.326 Reclaim Group Identifier format: 2 00:27:18.326 FDP Volatile Write Cache: Not Present 00:27:18.326 FDP Configuration: Valid 00:27:18.326 Vendor Specific Size: 0 00:27:18.326 Number of Reclaim Groups: 2 00:27:18.326 Number of Reclaim Unit Handles: 8 00:27:18.326 Max Placement Identifiers: 128 00:27:18.326 Number of Namespaces Supported: 256 00:27:18.326 Reclaim unit Nominal Size: 6000000 bytes 00:27:18.326 Estimated Reclaim Unit Time Limit: Not Reported 00:27:18.326 RUH Desc #000: RUH Type: Initially Isolated 00:27:18.326 RUH Desc #001: RUH Type: Initially Isolated 00:27:18.326 RUH Desc #002: RUH Type: Initially Isolated 00:27:18.326 RUH Desc #003: RUH Type: Initially Isolated 00:27:18.326 RUH Desc #004: RUH Type: Initially Isolated 00:27:18.326 RUH Desc #005: RUH Type: Initially Isolated 00:27:18.326 RUH Desc #006: RUH Type: Initially Isolated 00:27:18.326 RUH Desc #007: RUH Type: Initially Isolated 00:27:18.326 00:27:18.326 FDP reclaim unit handle usage log page 00:27:18.326 ====================================== 00:27:18.326 Number of Reclaim Unit Handles: 8 00:27:18.326 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:27:18.326 RUH Usage Desc #001: RUH Attributes: Unused 00:27:18.326 RUH Usage Desc #002: RUH Attributes: Unused 00:27:18.326 RUH Usage Desc #003: RUH Attributes: Unused 00:27:18.326 RUH Usage Desc #004: RUH Attributes: Unused 00:27:18.326 RUH Usage Desc #005: RUH Attributes: Unused 00:27:18.326 RUH Usage Desc #006: RUH Attributes: Unused 00:27:18.326 RUH Usage Desc #007: RUH Attributes: Unused 00:27:18.326 00:27:18.326 FDP statistics log page 00:27:18.326 ======================= 00:27:18.326 Host bytes with metadata written: 846131200 00:27:18.326 Media bytes with metadata written: 846209024 00:27:18.326 Media bytes erased: 0 00:27:18.326 00:27:18.326 FDP Reclaim unit handle status 00:27:18.327 ============================== 00:27:18.327 Number of RUHS descriptors: 2 00:27:18.327 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000003911 00:27:18.327 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:27:18.327 00:27:18.327 FDP write on placement id: 0 success 00:27:18.327 00:27:18.327 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:27:18.327 00:27:18.327 IO mgmt send: RUH update for Placement ID: #0 Success 00:27:18.327 00:27:18.327 Get Feature: FDP Events for Placement handle: #0 00:27:18.327 ======================== 00:27:18.327 Number of FDP Events: 6 00:27:18.327 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:27:18.327 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:27:18.327 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:27:18.327 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:27:18.327 FDP Event: #4 Type: Media Reallocated Enabled: No 00:27:18.327 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:27:18.327 00:27:18.327 FDP events log page
00:27:18.327 =================== 00:27:18.327 Number of FDP events: 1 00:27:18.327 FDP Event #0: 00:27:18.327 Event Type: RU Not Written to Capacity 00:27:18.327 Placement Identifier: Valid 00:27:18.327 NSID: Valid 00:27:18.327 Location: Valid 00:27:18.327 Placement Identifier: 0 00:27:18.327 Event Timestamp: 7 00:27:18.327 Namespace Identifier: 1 00:27:18.327 Reclaim Group Identifier: 0 00:27:18.327 Reclaim Unit Handle Identifier: 0 00:27:18.327 00:27:18.327 FDP test passed 00:27:18.327 00:27:18.327 real 0m0.288s 00:27:18.327 user 0m0.094s 00:27:18.327 sys 0m0.092s 00:27:18.327 13:47:26 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:18.327 13:47:26 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:27:18.327 ************************************ 00:27:18.327 END TEST nvme_flexible_data_placement 00:27:18.327 ************************************ 00:27:18.587 00:27:18.587 real 0m8.901s 00:27:18.587 user 0m1.643s 00:27:18.587 sys 0m2.333s 00:27:18.587 13:47:26 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:18.587 13:47:26 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:27:18.587 ************************************ 00:27:18.587 END TEST nvme_fdp 00:27:18.587 ************************************ 00:27:18.587 13:47:26 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:27:18.587 13:47:26 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:27:18.587 13:47:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:18.587 13:47:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:18.587 13:47:26 -- common/autotest_common.sh@10 -- # set +x 00:27:18.587 ************************************ 00:27:18.587 START TEST nvme_rpc 00:27:18.587 ************************************ 00:27:18.587 13:47:26 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:27:18.587 * Looking for test storage... 
00:27:18.587 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:18.587 13:47:26 nvme_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:18.587 13:47:26 nvme_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:27:18.587 13:47:26 nvme_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:18.846 13:47:26 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:18.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.846 --rc genhtml_branch_coverage=1 00:27:18.846 --rc genhtml_function_coverage=1 00:27:18.846 --rc genhtml_legend=1 00:27:18.846 --rc geninfo_all_blocks=1 00:27:18.846 --rc geninfo_unexecuted_blocks=1 00:27:18.846 00:27:18.846 ' 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:18.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.846 --rc genhtml_branch_coverage=1 00:27:18.846 --rc genhtml_function_coverage=1 00:27:18.846 --rc genhtml_legend=1 00:27:18.846 --rc geninfo_all_blocks=1 00:27:18.846 --rc geninfo_unexecuted_blocks=1 00:27:18.846 00:27:18.846 ' 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:27:18.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.846 --rc genhtml_branch_coverage=1 00:27:18.846 --rc genhtml_function_coverage=1 00:27:18.846 --rc genhtml_legend=1 00:27:18.846 --rc geninfo_all_blocks=1 00:27:18.846 --rc geninfo_unexecuted_blocks=1 00:27:18.846 00:27:18.846 ' 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:18.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:18.846 --rc genhtml_branch_coverage=1 00:27:18.846 --rc genhtml_function_coverage=1 00:27:18.846 --rc genhtml_legend=1 00:27:18.846 --rc geninfo_all_blocks=1 00:27:18.846 --rc geninfo_unexecuted_blocks=1 00:27:18.846 00:27:18.846 ' 00:27:18.846 13:47:26 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:18.846 13:47:26 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:27:18.846 13:47:26 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:27:18.847 13:47:26 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:27:18.847 13:47:26 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:27:18.847 13:47:26 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67706 00:27:18.847 13:47:26 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:27:18.847 13:47:26 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:27:18.847 13:47:26 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67706 00:27:18.847 13:47:26 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67706 ']' 00:27:18.847 13:47:26 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.847 13:47:26 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:18.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.847 13:47:26 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.847 13:47:26 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:18.847 13:47:26 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:19.106 [2024-11-20 13:47:26.597485] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:27:19.106 [2024-11-20 13:47:26.597628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67706 ] 00:27:19.106 [2024-11-20 13:47:26.764504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:19.366 [2024-11-20 13:47:26.899118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:19.366 [2024-11-20 13:47:26.899168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.304 13:47:27 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:20.304 13:47:27 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:27:20.304 13:47:27 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:27:20.564 Nvme0n1 00:27:20.564 13:47:28 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:27:20.564 13:47:28 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:27:20.824 request: 00:27:20.824 { 00:27:20.824 "bdev_name": "Nvme0n1", 00:27:20.824 "filename": "non_existing_file", 00:27:20.824 "method": "bdev_nvme_apply_firmware", 00:27:20.824 "req_id": 1 00:27:20.824 } 00:27:20.824 Got JSON-RPC error response 00:27:20.824 response: 00:27:20.824 { 00:27:20.824 "code": -32603, 00:27:20.824 "message": "open file failed." 00:27:20.824 } 00:27:20.824 13:47:28 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:27:20.824 13:47:28 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:27:20.824 13:47:28 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:21.083 13:47:28 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:27:21.083 13:47:28 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67706 00:27:21.084 13:47:28 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67706 ']' 00:27:21.084 13:47:28 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67706 00:27:21.084 13:47:28 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:27:21.084 13:47:28 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:21.084 13:47:28 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67706 00:27:21.084 13:47:28 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:21.084 13:47:28 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:21.084 13:47:28 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67706' 00:27:21.084 killing process with pid 67706 00:27:21.084 13:47:28 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67706 00:27:21.084 13:47:28 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67706 00:27:23.688 00:27:23.688 real 0m5.042s 00:27:23.688 user 0m9.365s 00:27:23.688 sys 0m0.748s 00:27:23.688 13:47:31 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:23.688 13:47:31 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:27:23.688 ************************************ 00:27:23.688 END TEST nvme_rpc 00:27:23.688 ************************************ 00:27:23.688 13:47:31 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:27:23.688 13:47:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:27:23.688 13:47:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:23.688 13:47:31 -- common/autotest_common.sh@10 -- # set +x 00:27:23.688 ************************************ 00:27:23.688 START TEST nvme_rpc_timeouts 00:27:23.688 ************************************ 00:27:23.688 13:47:31 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:27:23.688 * Looking for test storage... 00:27:23.688 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:23.688 13:47:31 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:23.688 13:47:31 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lcov --version 00:27:23.688 13:47:31 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:23.947 13:47:31 nvme_rpc_timeouts -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:23.948 13:47:31 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:27:23.948 13:47:31 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:23.948 13:47:31 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:23.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.948 --rc genhtml_branch_coverage=1 00:27:23.948 --rc genhtml_function_coverage=1 00:27:23.948 --rc genhtml_legend=1 00:27:23.948 --rc geninfo_all_blocks=1 00:27:23.948 --rc geninfo_unexecuted_blocks=1 00:27:23.948 00:27:23.948 ' 00:27:23.948 13:47:31 nvme_rpc_timeouts -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:23.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.948 --rc genhtml_branch_coverage=1 00:27:23.948 --rc genhtml_function_coverage=1 00:27:23.948 --rc genhtml_legend=1 00:27:23.948 --rc geninfo_all_blocks=1 00:27:23.948 --rc geninfo_unexecuted_blocks=1 00:27:23.948 00:27:23.948 ' 00:27:23.948 13:47:31 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:23.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.948 --rc genhtml_branch_coverage=1 00:27:23.948 --rc genhtml_function_coverage=1 00:27:23.948 --rc genhtml_legend=1 00:27:23.948 --rc geninfo_all_blocks=1 00:27:23.948 --rc geninfo_unexecuted_blocks=1 00:27:23.948 00:27:23.948 ' 00:27:23.948 13:47:31 nvme_rpc_timeouts -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:23.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:23.948 --rc genhtml_branch_coverage=1 00:27:23.948 --rc genhtml_function_coverage=1 00:27:23.948 --rc genhtml_legend=1 00:27:23.948 --rc geninfo_all_blocks=1 00:27:23.948 --rc geninfo_unexecuted_blocks=1 00:27:23.948 00:27:23.948 ' 00:27:23.948 13:47:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:23.948 13:47:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67794 00:27:23.948 13:47:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67794 00:27:23.948 13:47:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67826 00:27:23.948 13:47:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:27:23.948 13:47:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 
-- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:27:23.948 13:47:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67826 00:27:23.948 13:47:31 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67826 ']' 00:27:23.948 13:47:31 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.948 13:47:31 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:23.948 13:47:31 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.948 13:47:31 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:23.948 13:47:31 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:27:23.948 [2024-11-20 13:47:31.603836] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:27:23.948 [2024-11-20 13:47:31.603973] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67826 ] 00:27:24.207 [2024-11-20 13:47:31.785431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:24.467 [2024-11-20 13:47:31.931734] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.467 [2024-11-20 13:47:31.931796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.404 13:47:32 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:25.404 13:47:32 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:27:25.404 Checking default timeout settings: 00:27:25.404 13:47:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:27:25.404 13:47:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:25.663 Making settings changes with rpc: 00:27:25.663 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:27:25.663 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:27:25.922 Check default vs. modified settings: 00:27:25.922 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:27:25.922 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67794 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67794 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:27:26.491 Setting action_on_timeout is changed as expected. 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67794 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67794 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:27:26.491 Setting timeout_us is changed as expected. 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
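Each settings check above is the same three-stage pipeline run against the two config snapshots: grep pulls the line for the key out of the JSON dumped by save_config, awk takes the value column, and sed strips quotes and punctuation so only the bare value remains. Condensed, the loop the script runs is equivalent to this sketch (tmpfile names follow the test's pid-suffixed convention from this run):

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_67794 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_67794 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        # Fail the test if applying the new options left the value untouched.
        [ "$before" == "$after" ] && { echo "Setting $setting was not changed" >&2; exit 1; }
        echo "Setting $setting is changed as expected."
    done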
00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67794 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:26.491 13:47:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:27:26.491 13:47:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67794 00:27:26.491 13:47:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:27:26.491 13:47:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:27:26.491 13:47:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:27:26.491 13:47:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:27:26.491 Setting timeout_admin_us is changed as expected. 00:27:26.491 13:47:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:27:26.491 13:47:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:27:26.491 13:47:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67794 /tmp/settings_modified_67794 00:27:26.491 13:47:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67826 00:27:26.491 13:47:34 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67826 ']' 00:27:26.491 13:47:34 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67826 00:27:26.491 13:47:34 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:27:26.491 13:47:34 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:26.491 13:47:34 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67826 00:27:26.491 13:47:34 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:26.491 13:47:34 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:26.491 killing process with pid 67826 00:27:26.491 13:47:34 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67826' 00:27:26.491 13:47:34 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67826 00:27:26.492 13:47:34 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67826 00:27:29.782 RPC TIMEOUT SETTING TEST PASSED. 00:27:29.782 13:47:36 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
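For reference, the three knobs this test flips configure the NVMe bdev driver's timeout handling: timeout-us bounds I/O commands, timeout-admin-us bounds admin commands, and action-on-timeout selects what the driver does when the deadline expires. The invocation traced earlier, runnable against a spdk_tgt listening on the default /var/tmp/spdk.sock (timeouts are in microseconds):

    ./scripts/rpc.py bdev_nvme_set_options \
        --timeout-us=12000000 \
        --timeout-admin-us=24000000 \
        --action-on-timeout=abort
    # Persist the resulting configuration for later comparison:
    ./scripts/rpc.py save_config > /tmp/settings_modified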
00:27:29.782 00:27:29.782 real 0m5.588s 00:27:29.782 user 0m10.493s 00:27:29.782 sys 0m0.932s 00:27:29.782 13:47:36 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:29.782 ************************************ 00:27:29.782 END TEST nvme_rpc_timeouts 00:27:29.782 ************************************ 00:27:29.782 13:47:36 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:27:29.782 13:47:36 -- spdk/autotest.sh@239 -- # uname -s 00:27:29.782 13:47:36 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:27:29.782 13:47:36 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:27:29.782 13:47:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:29.782 13:47:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:29.782 13:47:36 -- common/autotest_common.sh@10 -- # set +x 00:27:29.782 ************************************ 00:27:29.782 START TEST sw_hotplug 00:27:29.782 ************************************ 00:27:29.782 13:47:36 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:27:29.782 * Looking for test storage... 00:27:29.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:27:29.782 13:47:37 sw_hotplug -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:29.782 13:47:37 sw_hotplug -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:29.782 13:47:37 sw_hotplug -- common/autotest_common.sh@1693 -- # lcov --version 00:27:29.782 13:47:37 sw_hotplug -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:29.782 13:47:37 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:27:29.782 13:47:37 sw_hotplug -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:29.782 13:47:37 sw_hotplug -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:29.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.782 --rc genhtml_branch_coverage=1 00:27:29.782 --rc genhtml_function_coverage=1 00:27:29.782 --rc genhtml_legend=1 00:27:29.782 --rc geninfo_all_blocks=1 00:27:29.782 --rc geninfo_unexecuted_blocks=1 00:27:29.782 00:27:29.782 ' 00:27:29.782 13:47:37 sw_hotplug -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:29.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.782 --rc genhtml_branch_coverage=1 00:27:29.782 --rc genhtml_function_coverage=1 00:27:29.782 --rc genhtml_legend=1 00:27:29.782 --rc geninfo_all_blocks=1 00:27:29.782 --rc geninfo_unexecuted_blocks=1 00:27:29.782 00:27:29.782 ' 00:27:29.782 13:47:37 sw_hotplug -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:29.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.782 --rc genhtml_branch_coverage=1 00:27:29.782 --rc genhtml_function_coverage=1 00:27:29.782 --rc genhtml_legend=1 00:27:29.782 --rc geninfo_all_blocks=1 00:27:29.782 --rc geninfo_unexecuted_blocks=1 00:27:29.782 00:27:29.782 ' 00:27:29.782 13:47:37 sw_hotplug -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:29.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:29.782 --rc genhtml_branch_coverage=1 00:27:29.782 --rc genhtml_function_coverage=1 00:27:29.782 --rc genhtml_legend=1 00:27:29.782 --rc geninfo_all_blocks=1 00:27:29.782 --rc geninfo_unexecuted_blocks=1 00:27:29.782 00:27:29.782 ' 00:27:29.782 13:47:37 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:30.041 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:30.299 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:30.299 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:30.299 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:30.299 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:30.299 13:47:37 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:27:30.299 13:47:37 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:27:30.299 13:47:37 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
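The nvme_in_userspace expansion traced in the lines that follow reduces to a single lspci scan: NVMe controllers are PCI class 01 (mass storage), subclass 08 (non-volatile memory), programming interface 02, so the helper matches class code 0108 and progif -p02, prints the BDF column, and then filters each hit through pci_can_use against the PCI_ALLOWED/PCI_BLOCKED lists. The core pipeline, lifted from the trace below (the allow-list filtering is omitted here):

    # BDFs of every NVMe controller (class 01, subclass 08, progif 02).
    lspci -mm -n -D | grep -i -- -p02 | \
        awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'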
00:27:30.299 13:47:37 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:27:30.299 13:47:37 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:27:30.299 13:47:37 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:27:30.299 13:47:37 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:27:30.299 13:47:37 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:27:30.299 13:47:37 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:27:30.299 13:47:37 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:27:30.299 13:47:37 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:27:30.299 13:47:37 sw_hotplug -- scripts/common.sh@233 -- # local class 00:27:30.299 13:47:37 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:27:30.299 13:47:37 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:27:30.299 13:47:37 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:27:30.299 13:47:37 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:27:30.299 13:47:37 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:27:30.299 13:47:37 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@18 -- # local i 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@18 -- # local i 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@18 -- # local i 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:27:30.300 13:47:37 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@18 -- # local i 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:27:30.300 13:47:37 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:27:30.300 13:47:37 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:27:30.300 13:47:37 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:27:30.300 13:47:37 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:30.869 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:31.127 Waiting for block devices as requested 00:27:31.127 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:31.387 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:31.387 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:27:31.387 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:27:36.719 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:27:36.719 13:47:44 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:27:36.719 13:47:44 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:36.976 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:27:37.235 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:37.235 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:27:37.495 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:27:38.060 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:38.060 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:38.060 13:47:45 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:27:38.060 13:47:45 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:27:38.060 13:47:45 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:27:38.060 13:47:45 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:27:38.060 13:47:45 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68717 00:27:38.060 13:47:45 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:27:38.060 13:47:45 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:27:38.060 13:47:45 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:27:38.060 13:47:45 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:27:38.060 13:47:45 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:27:38.060 13:47:45 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:27:38.060 13:47:45 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:27:38.060 13:47:45 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:27:38.060 13:47:45 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:27:38.060 13:47:45 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:27:38.060 13:47:45 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:27:38.060 13:47:45 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:27:38.060 13:47:45 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:27:38.060 13:47:45 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:27:38.319 Initializing NVMe Controllers 00:27:38.319 Attaching to 0000:00:10.0 00:27:38.319 Attaching to 0000:00:11.0 00:27:38.319 Attached to 0000:00:11.0 00:27:38.319 Attached to 0000:00:10.0 00:27:38.319 Initialization complete. Starting I/O... 
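In the hotplug cycles that follow, each remove/attach is driven entirely through sysfs writes; the bare echo 1 / echo uio_pci_generic / echo <bdf> statements traced below are those writes. One cycle for a single device looks roughly like this sketch (standard Linux PCI sysfs paths; the exact rebind order in sw_hotplug.sh may differ in detail):

    bdf=0000:00:10.0
    echo 1 > /sys/bus/pci/devices/$bdf/remove      # surprise-remove: outstanding I/O gets aborted
    echo 1 > /sys/bus/pci/rescan                   # re-enumerate the bus; the device reappears
    echo uio_pci_generic > /sys/bus/pci/devices/$bdf/driver_override
    echo $bdf > /sys/bus/pci/drivers_probe         # bind it back to the userspace I/O driver
    echo '' > /sys/bus/pci/devices/$bdf/driver_override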
00:27:38.319 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:27:38.319 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:27:38.319 00:27:39.253 QEMU NVMe Ctrl (12341 ): 1692 I/Os completed (+1692) 00:27:39.253 QEMU NVMe Ctrl (12340 ): 1693 I/Os completed (+1693) 00:27:39.253 00:27:40.646 QEMU NVMe Ctrl (12341 ): 3945 I/Os completed (+2253) 00:27:40.646 QEMU NVMe Ctrl (12340 ): 3963 I/Os completed (+2270) 00:27:40.646 00:27:41.585 QEMU NVMe Ctrl (12341 ): 6381 I/Os completed (+2436) 00:27:41.585 QEMU NVMe Ctrl (12340 ): 6399 I/Os completed (+2436) 00:27:41.585 00:27:42.518 QEMU NVMe Ctrl (12341 ): 8849 I/Os completed (+2468) 00:27:42.518 QEMU NVMe Ctrl (12340 ): 8867 I/Os completed (+2468) 00:27:42.518 00:27:43.453 QEMU NVMe Ctrl (12341 ): 11277 I/Os completed (+2428) 00:27:43.453 QEMU NVMe Ctrl (12340 ): 11302 I/Os completed (+2435) 00:27:43.453 00:27:44.020 13:47:51 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:27:44.020 13:47:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:27:44.020 13:47:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:27:44.020 [2024-11-20 13:47:51.730792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:27:44.020 Controller removed: QEMU NVMe Ctrl (12340 ) 00:27:44.020 [2024-11-20 13:47:51.732583] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:44.020 [2024-11-20 13:47:51.732687] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:44.020 [2024-11-20 13:47:51.732758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:44.020 [2024-11-20 13:47:51.732804] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:44.020 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:27:44.020 [2024-11-20 13:47:51.735673] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:44.020 [2024-11-20 13:47:51.735767] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:44.020 [2024-11-20 13:47:51.735823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:44.020 [2024-11-20 13:47:51.735859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:44.279 13:47:51 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:27:44.279 13:47:51 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:27:44.279 [2024-11-20 13:47:51.770241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:27:44.279 Controller removed: QEMU NVMe Ctrl (12341 ) 00:27:44.279 [2024-11-20 13:47:51.771801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:44.279 [2024-11-20 13:47:51.771901] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:44.279 [2024-11-20 13:47:51.771963] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:44.279 [2024-11-20 13:47:51.772008] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:44.279 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:27:44.279 [2024-11-20 13:47:51.774660] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:44.279 [2024-11-20 13:47:51.774742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:44.279 [2024-11-20 13:47:51.774786] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:44.279 [2024-11-20 13:47:51.774847] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:44.279 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:27:44.279 EAL: Scan for (pci) bus failed. 00:27:44.279 13:47:51 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:27:44.279 13:47:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:27:44.279 13:47:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:27:44.279 13:47:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:27:44.279 13:47:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:27:44.279 13:47:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:27:44.279 13:47:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:27:44.279 13:47:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:27:44.279 13:47:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:27:44.279 13:47:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:27:44.279 Attaching to 0000:00:10.0 00:27:44.279 Attached to 0000:00:10.0 00:27:44.279 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:27:44.279 00:27:44.538 13:47:52 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:27:44.538 13:47:52 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:27:44.538 13:47:52 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:27:44.538 Attaching to 0000:00:11.0 00:27:44.538 Attached to 0000:00:11.0 00:27:45.473 QEMU NVMe Ctrl (12340 ): 2544 I/Os completed (+2544) 00:27:45.473 QEMU NVMe Ctrl (12341 ): 2316 I/Os completed (+2316) 00:27:45.473 00:27:46.409 QEMU NVMe Ctrl (12340 ): 5036 I/Os completed (+2492) 00:27:46.409 QEMU NVMe Ctrl (12341 ): 4859 I/Os completed (+2543) 00:27:46.409 00:27:47.342 QEMU NVMe Ctrl (12340 ): 7336 I/Os completed (+2300) 00:27:47.342 QEMU NVMe Ctrl (12341 ): 7159 I/Os completed (+2300) 00:27:47.342 00:27:48.277 QEMU NVMe Ctrl (12340 ): 9672 I/Os completed (+2336) 00:27:48.277 QEMU NVMe Ctrl (12341 ): 9502 I/Os completed (+2343) 00:27:48.277 00:27:49.658 QEMU NVMe Ctrl (12340 ): 12023 I/Os completed (+2351) 00:27:49.658 QEMU NVMe Ctrl (12341 ): 11852 I/Os completed (+2350) 00:27:49.658 00:27:50.226 QEMU NVMe Ctrl (12340 ): 14322 I/Os completed (+2299) 00:27:50.226 QEMU NVMe Ctrl (12341 ): 14260 I/Os completed (+2408) 00:27:50.226 00:27:51.604 QEMU NVMe Ctrl (12340 ): 16646 I/Os completed (+2324) 00:27:51.604 
QEMU NVMe Ctrl (12341 ): 16594 I/Os completed (+2334) 00:27:51.604 00:27:52.539 QEMU NVMe Ctrl (12340 ): 18918 I/Os completed (+2272) 00:27:52.539 QEMU NVMe Ctrl (12341 ): 18933 I/Os completed (+2339) 00:27:52.539 00:27:53.479 QEMU NVMe Ctrl (12340 ): 21218 I/Os completed (+2300) 00:27:53.479 QEMU NVMe Ctrl (12341 ): 21235 I/Os completed (+2302) 00:27:53.479 00:27:54.448 QEMU NVMe Ctrl (12340 ): 23246 I/Os completed (+2028) 00:27:54.448 QEMU NVMe Ctrl (12341 ): 23386 I/Os completed (+2151) 00:27:54.448 00:27:55.386 QEMU NVMe Ctrl (12340 ): 25098 I/Os completed (+1852) 00:27:55.386 QEMU NVMe Ctrl (12341 ): 25238 I/Os completed (+1852) 00:27:55.386 00:27:56.324 QEMU NVMe Ctrl (12340 ): 27180 I/Os completed (+2082) 00:27:56.324 QEMU NVMe Ctrl (12341 ): 27413 I/Os completed (+2175) 00:27:56.324 00:27:56.324 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:27:56.324 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:27:56.324 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:27:56.324 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:27:56.324 [2024-11-20 13:48:04.036515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:27:56.324 Controller removed: QEMU NVMe Ctrl (12340 ) 00:27:56.324 [2024-11-20 13:48:04.039288] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.324 [2024-11-20 13:48:04.039384] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.324 [2024-11-20 13:48:04.039419] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.324 [2024-11-20 13:48:04.039453] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.586 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:27:56.586 [2024-11-20 13:48:04.043531] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.586 [2024-11-20 13:48:04.043674] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.586 [2024-11-20 13:48:04.043778] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.586 [2024-11-20 13:48:04.043853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.586 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:27:56.586 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:27:56.586 [2024-11-20 13:48:04.081031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:27:56.586 Controller removed: QEMU NVMe Ctrl (12341 ) 00:27:56.586 [2024-11-20 13:48:04.083463] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.586 [2024-11-20 13:48:04.083609] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.586 [2024-11-20 13:48:04.083659] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.586 [2024-11-20 13:48:04.083692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.586 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:27:56.586 [2024-11-20 13:48:04.087243] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.586 [2024-11-20 13:48:04.087384] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.586 [2024-11-20 13:48:04.087425] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.586 [2024-11-20 13:48:04.087455] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:27:56.586 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:27:56.586 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:27:56.586 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:27:56.586 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:27:56.586 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:27:56.586 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:27:56.586 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:27:56.586 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:27:56.586 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:27:56.586 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:27:56.586 Attaching to 0000:00:10.0 00:27:56.586 Attached to 0000:00:10.0 00:27:56.849 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:27:56.849 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:27:56.849 13:48:04 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:27:56.849 Attaching to 0000:00:11.0 00:27:56.849 Attached to 0000:00:11.0 00:27:57.417 QEMU NVMe Ctrl (12340 ): 1467 I/Os completed (+1467) 00:27:57.417 QEMU NVMe Ctrl (12341 ): 1259 I/Os completed (+1259) 00:27:57.417 00:27:58.356 QEMU NVMe Ctrl (12340 ): 3709 I/Os completed (+2242) 00:27:58.356 QEMU NVMe Ctrl (12341 ): 4083 I/Os completed (+2824) 00:27:58.356 00:27:59.293 QEMU NVMe Ctrl (12340 ): 6082 I/Os completed (+2373) 00:27:59.293 QEMU NVMe Ctrl (12341 ): 6879 I/Os completed (+2796) 00:27:59.293 00:28:00.227 QEMU NVMe Ctrl (12340 ): 8576 I/Os completed (+2494) 00:28:00.227 QEMU NVMe Ctrl (12341 ): 10019 I/Os completed (+3140) 00:28:00.227 00:28:01.605 QEMU NVMe Ctrl (12340 ): 10932 I/Os completed (+2356) 00:28:01.605 QEMU NVMe Ctrl (12341 ): 12861 I/Os completed (+2842) 00:28:01.605 00:28:02.542 QEMU NVMe Ctrl (12340 ): 13244 I/Os completed (+2312) 00:28:02.542 QEMU NVMe Ctrl (12341 ): 15375 I/Os completed (+2514) 00:28:02.542 00:28:03.497 QEMU NVMe Ctrl (12340 ): 15561 I/Os completed (+2317) 00:28:03.497 QEMU NVMe Ctrl (12341 ): 17690 I/Os completed (+2315) 00:28:03.497 00:28:04.434 QEMU NVMe Ctrl (12340 ): 17819 I/Os completed (+2258) 00:28:04.434 QEMU NVMe Ctrl (12341 ): 19970 I/Os completed (+2280) 00:28:04.434 00:28:05.369 
QEMU NVMe Ctrl (12340 ): 20050 I/Os completed (+2231) 00:28:05.369 QEMU NVMe Ctrl (12341 ): 22264 I/Os completed (+2294) 00:28:05.369 00:28:06.345 QEMU NVMe Ctrl (12340 ): 22306 I/Os completed (+2256) 00:28:06.345 QEMU NVMe Ctrl (12341 ): 24707 I/Os completed (+2443) 00:28:06.345 00:28:07.279 QEMU NVMe Ctrl (12340 ): 24582 I/Os completed (+2276) 00:28:07.279 QEMU NVMe Ctrl (12341 ): 27091 I/Os completed (+2384) 00:28:07.279 00:28:08.212 QEMU NVMe Ctrl (12340 ): 26784 I/Os completed (+2202) 00:28:08.212 QEMU NVMe Ctrl (12341 ): 29286 I/Os completed (+2195) 00:28:08.212 00:28:08.779 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:28:08.779 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:28:08.779 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:28:08.779 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:28:08.779 [2024-11-20 13:48:16.393699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:28:08.779 Controller removed: QEMU NVMe Ctrl (12340 ) 00:28:08.779 [2024-11-20 13:48:16.399390] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:08.779 [2024-11-20 13:48:16.399797] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:08.779 [2024-11-20 13:48:16.399981] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:08.779 [2024-11-20 13:48:16.400144] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:08.779 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:28:08.779 [2024-11-20 13:48:16.411418] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:08.779 [2024-11-20 13:48:16.411637] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:08.779 [2024-11-20 13:48:16.411746] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:08.779 [2024-11-20 13:48:16.411850] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:08.779 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:28:08.779 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:28:08.779 [2024-11-20 13:48:16.422561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
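Annotation: the bare "echo 1" / "echo uio_pci_generic" / "echo 0000:00:10.0" lines in the xtrace above are the helper writing into the kernel's PCI sysfs interface to fake a surprise removal and then rebind the device to a userspace-capable driver. A minimal sketch of that sequence follows; the exact sysfs target files are an assumption (the xtrace records only the values echoed, not the redirection targets), though the script's own trap confirms at least "echo 1 > /sys/bus/pci/rescan":

    # Sketch: simulate surprise hot-remove of an NVMe device, then rebind it
    # to uio_pci_generic instead of the kernel nvme driver.
    # Assumption: standard Linux PCI sysfs layout.
    bdf=0000:00:10.0

    echo 1 > "/sys/bus/pci/devices/$bdf/remove"     # surprise hot-remove
    echo 1 > /sys/bus/pci/rescan                    # device re-appears on the bus

    # Steer the returning device to the userspace-capable driver.
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe        # probe with the override in place
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override again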
00:28:08.779 Controller removed: QEMU NVMe Ctrl (12341 ) 00:28:08.779 [2024-11-20 13:48:16.424816] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:08.779 [2024-11-20 13:48:16.424965] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:08.779 [2024-11-20 13:48:16.425045] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:08.779 [2024-11-20 13:48:16.425107] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:08.779 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:28:08.779 [2024-11-20 13:48:16.428964] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:08.779 [2024-11-20 13:48:16.429034] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:08.779 [2024-11-20 13:48:16.429059] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:08.779 [2024-11-20 13:48:16.429076] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:08.779 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:28:08.779 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:28:08.779 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:28:08.779 EAL: Scan for (pci) bus failed. 00:28:09.037 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:28:09.037 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:28:09.037 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:28:09.037 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:28:09.037 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:28:09.037 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:28:09.037 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:28:09.037 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:28:09.037 Attaching to 0000:00:10.0 00:28:09.037 Attached to 0000:00:10.0 00:28:09.037 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:28:09.037 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:28:09.037 13:48:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:28:09.037 Attaching to 0000:00:11.0 00:28:09.037 Attached to 0000:00:11.0 00:28:09.037 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:28:09.037 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:28:09.037 [2024-11-20 13:48:16.736841] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:28:21.250 13:48:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:28:21.250 13:48:28 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:28:21.250 13:48:28 sw_hotplug -- common/autotest_common.sh@719 -- # time=43.00 00:28:21.250 13:48:28 sw_hotplug -- common/autotest_common.sh@720 -- # echo 43.00 00:28:21.250 13:48:28 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:28:21.250 13:48:28 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=43.00 00:28:21.250 13:48:28 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 43.00 2 00:28:21.250 remove_attach_helper took 43.00s to complete (handling 2 nvme drive(s)) 13:48:28 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:28:27.840 13:48:34 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68717 00:28:27.840 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68717) - No such process 00:28:27.840 13:48:34 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68717 00:28:27.840 13:48:34 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:28:27.840 13:48:34 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:28:27.840 13:48:34 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:28:27.840 13:48:34 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69254 00:28:27.840 13:48:34 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:27.840 13:48:34 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:28:27.840 13:48:34 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69254 00:28:27.840 13:48:34 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69254 ']' 00:28:27.840 13:48:34 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.840 13:48:34 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.840 13:48:34 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.840 13:48:34 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.840 13:48:34 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:28:27.840 [2024-11-20 13:48:34.846069] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
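Annotation: the "waitforlisten 69254" call above blocks until the freshly launched spdk_tgt has brought up its JSON-RPC socket. A hedged sketch of that launch-and-wait pattern, not the real waitforlisten implementation: /var/tmp/spdk.sock is the default socket path, rpc_get_methods is used here only as a cheap liveness probe, and the retry cap mirrors the max_retries=100 seen in the trace:

    # Launch the target and wait until its JSON-RPC socket answers.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt &
    spdk_tgt_pid=$!

    retries=100
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        (( retries-- > 0 )) || { echo "spdk_tgt never started listening" >&2; exit 1; }
        kill -0 "$spdk_tgt_pid" || exit 1   # bail out if the target already died
        sleep 0.1
    done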
00:28:27.840 [2024-11-20 13:48:34.846274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69254 ] 00:28:27.840 [2024-11-20 13:48:35.029286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:27.840 [2024-11-20 13:48:35.157261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.413 13:48:36 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:28.413 13:48:36 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:28:28.413 13:48:36 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:28:28.413 13:48:36 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.413 13:48:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:28:28.413 13:48:36 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.413 13:48:36 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:28:28.413 13:48:36 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:28:28.672 13:48:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:28:28.672 13:48:36 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:28:28.672 13:48:36 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:28:28.672 13:48:36 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:28:28.672 13:48:36 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:28:28.672 13:48:36 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:28:28.672 13:48:36 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:28:28.672 13:48:36 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:28:28.672 13:48:36 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:28:28.672 13:48:36 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:28:28.672 13:48:36 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:28:35.244 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:28:35.244 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:28:35.244 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:28:35.244 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:28:35.244 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:28:35.244 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:28:35.244 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:28:35.244 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:28:35.244 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:28:35.244 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:28:35.244 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:28:35.244 13:48:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.244 13:48:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:28:35.244 [2024-11-20 13:48:42.215340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
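Annotation: "rpc_cmd bdev_nvme_set_hotplug -e" (sw_hotplug.sh@115 above) enables the bdev_nvme hotplug monitor, so controllers that reappear on the bus are re-attached automatically; sw_hotplug.sh@119/@120 later toggle it off and back on. Stand-alone, the same calls would look roughly like this; the -r poll-period flag is taken from the RPC's documented options and should be treated as an assumption, with 100000 us an arbitrary example value:

    # Enable automatic attach of hotplugged NVMe controllers, polling every 100 ms.
    scripts/rpc.py bdev_nvme_set_hotplug -e -r 100000

    # ... run the remove/rescan cycles ...

    scripts/rpc.py bdev_nvme_set_hotplug -d   # disable the monitor again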
00:28:35.244 [2024-11-20 13:48:42.218318] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:35.244 [2024-11-20 13:48:42.218431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.244 [2024-11-20 13:48:42.218492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.244 [2024-11-20 13:48:42.218547] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:35.244 [2024-11-20 13:48:42.218598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.244 [2024-11-20 13:48:42.218670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.244 [2024-11-20 13:48:42.218733] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:35.244 [2024-11-20 13:48:42.218799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.244 [2024-11-20 13:48:42.218849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.244 [2024-11-20 13:48:42.218905] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:35.244 [2024-11-20 13:48:42.218939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.244 [2024-11-20 13:48:42.218990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.244 13:48:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.244 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:28:35.244 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:28:35.244 [2024-11-20 13:48:42.614575] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:28:35.244 [2024-11-20 13:48:42.617263] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:35.244 [2024-11-20 13:48:42.617357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.244 [2024-11-20 13:48:42.617410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.244 [2024-11-20 13:48:42.617459] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:35.245 [2024-11-20 13:48:42.617487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.245 [2024-11-20 13:48:42.617560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.245 [2024-11-20 13:48:42.617600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:35.245 [2024-11-20 13:48:42.617611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.245 [2024-11-20 13:48:42.617624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.245 [2024-11-20 13:48:42.617635] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:35.245 [2024-11-20 13:48:42.617647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.245 [2024-11-20 13:48:42.617657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.245 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:28:35.245 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:28:35.245 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:28:35.245 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:28:35.245 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:28:35.245 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:28:35.245 13:48:42 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.245 13:48:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:28:35.245 13:48:42 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.245 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:28:35.245 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:28:35.245 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:28:35.245 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:28:35.245 13:48:42 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:28:35.504 13:48:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:28:35.504 13:48:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:28:35.504 13:48:43 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:28:35.504 13:48:43 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:28:35.504 13:48:43 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:28:35.504 13:48:43 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:28:35.504 13:48:43 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:28:35.504 13:48:43 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:28:47.718 13:48:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.718 13:48:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:28:47.718 13:48:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:28:47.718 [2024-11-20 13:48:55.190664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:28:47.718 [2024-11-20 13:48:55.194801] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:47.718 [2024-11-20 13:48:55.194987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.718 [2024-11-20 13:48:55.195084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.718 [2024-11-20 13:48:55.195180] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:47.718 [2024-11-20 13:48:55.195261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.718 [2024-11-20 13:48:55.195343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.718 [2024-11-20 13:48:55.195422] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:47.718 [2024-11-20 13:48:55.195476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.718 [2024-11-20 13:48:55.195498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.718 [2024-11-20 13:48:55.195519] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:47.718 [2024-11-20 13:48:55.195536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.718 [2024-11-20 13:48:55.195555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:28:47.718 13:48:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.718 13:48:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:28:47.718 13:48:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:28:47.718 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:28:47.976 [2024-11-20 13:48:55.589828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:28:47.976 [2024-11-20 13:48:55.592453] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:47.976 [2024-11-20 13:48:55.592563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.976 [2024-11-20 13:48:55.592591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.976 [2024-11-20 13:48:55.592618] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:47.976 [2024-11-20 13:48:55.592632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.976 [2024-11-20 13:48:55.592644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.976 [2024-11-20 13:48:55.592657] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:47.976 [2024-11-20 13:48:55.592669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.976 [2024-11-20 13:48:55.592681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.977 [2024-11-20 13:48:55.592692] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:28:47.977 [2024-11-20 13:48:55.592704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.977 [2024-11-20 13:48:55.592731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:48.234 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:28:48.234 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:28:48.234 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:28:48.234 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:28:48.234 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:28:48.234 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:28:48.234 13:48:55 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.234 13:48:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:28:48.234 13:48:55 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.234 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:28:48.234 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:28:48.234 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:28:48.234 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:28:48.234 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:28:48.492 13:48:55 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:28:48.492 13:48:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:28:48.492 13:48:56 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:28:48.492 13:48:56 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:28:48.492 13:48:56 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:28:48.492 13:48:56 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:28:48.492 13:48:56 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:28:48.492 13:48:56 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:00.703 13:49:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.703 13:49:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:00.703 13:49:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:00.703 [2024-11-20 13:49:08.165885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
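Annotation: the bdfs=($(bdev_bdfs)) / jq / sort -u fragments repeated through this trace come from a small helper that lists which PCI addresses still back an NVMe bdev; the test polls it until the hot-removed controllers drop out. Reassembled from the xtrace above (rpc_cmd is the harness wrapper around scripts/rpc.py; names are as they appear in the trace):

    # List the PCI address of every NVMe controller that still has a bdev.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }

    # Poll until the hot-removed devices are gone (sw_hotplug.sh@50-@51).
    while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done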
00:29:00.703 [2024-11-20 13:49:08.168657] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:00.703 [2024-11-20 13:49:08.168711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.703 [2024-11-20 13:49:08.168746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.703 [2024-11-20 13:49:08.168774] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:00.703 [2024-11-20 13:49:08.168786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.703 [2024-11-20 13:49:08.168802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.703 [2024-11-20 13:49:08.168814] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:00.703 [2024-11-20 13:49:08.168826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.703 [2024-11-20 13:49:08.168837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.703 [2024-11-20 13:49:08.168850] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:00.703 [2024-11-20 13:49:08.168861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.703 [2024-11-20 13:49:08.168873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:00.703 13:49:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.703 13:49:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:00.703 13:49:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:29:00.703 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:29:00.962 [2024-11-20 13:49:08.565135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:29:00.962 [2024-11-20 13:49:08.567656] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:00.962 [2024-11-20 13:49:08.567707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.962 [2024-11-20 13:49:08.567741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.962 [2024-11-20 13:49:08.567765] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:00.962 [2024-11-20 13:49:08.567778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.963 [2024-11-20 13:49:08.567789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.963 [2024-11-20 13:49:08.567803] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:00.963 [2024-11-20 13:49:08.567813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.963 [2024-11-20 13:49:08.567829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:00.963 [2024-11-20 13:49:08.567839] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:00.963 [2024-11-20 13:49:08.567851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:29:00.963 [2024-11-20 13:49:08.567861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:01.221 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:29:01.221 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:29:01.221 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:29:01.221 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:01.221 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:01.221 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:01.221 13:49:08 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:01.221 13:49:08 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:01.221 13:49:08 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.221 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:29:01.221 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:29:01.221 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:29:01.221 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:29:01.221 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:29:01.481 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:29:01.481 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:29:01.481 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:29:01.481 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:29:01.481 13:49:08 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:29:01.481 13:49:09 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:29:01.481 13:49:09 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:29:01.481 13:49:09 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.02 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.02 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.02 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.02 2 00:29:13.804 remove_attach_helper took 45.02s to complete (handling 2 nvme drive(s)) 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:29:13.804 13:49:21 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:29:13.804 13:49:21 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:29:13.804 13:49:21 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:29:20.374 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:29:20.374 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:20.374 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:20.374 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:20.374 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:20.374 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:29:20.374 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:29:20.374 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:29:20.374 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:20.374 13:49:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.374 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:20.374 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:20.374 13:49:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:20.374 [2024-11-20 13:49:27.268609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:29:20.374 [2024-11-20 13:49:27.270576] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:20.375 [2024-11-20 13:49:27.270699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.375 [2024-11-20 13:49:27.270759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.375 [2024-11-20 13:49:27.270793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:20.375 [2024-11-20 13:49:27.270805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.375 [2024-11-20 13:49:27.270819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.375 [2024-11-20 13:49:27.270831] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:20.375 [2024-11-20 13:49:27.270848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.375 [2024-11-20 13:49:27.270859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.375 [2024-11-20 13:49:27.270873] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:20.375 [2024-11-20 13:49:27.270884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.375 [2024-11-20 13:49:27.270899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.375 13:49:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.375 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:29:20.375 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:29:20.375 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:29:20.375 13:49:27 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:29:20.375 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:29:20.375 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:20.375 13:49:27 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.375 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:20.375 13:49:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:20.375 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:20.375 13:49:27 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.375 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:29:20.375 13:49:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:29:20.375 [2024-11-20 13:49:27.967317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:29:20.375 [2024-11-20 13:49:27.969515] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:20.375 [2024-11-20 13:49:27.969630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.375 [2024-11-20 13:49:27.969656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.375 [2024-11-20 13:49:27.969683] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:20.375 [2024-11-20 13:49:27.969699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.375 [2024-11-20 13:49:27.969710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.375 [2024-11-20 13:49:27.969737] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:20.375 [2024-11-20 13:49:27.969748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.375 [2024-11-20 13:49:27.969763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.375 [2024-11-20 13:49:27.969774] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:20.375 [2024-11-20 13:49:27.969788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:29:20.375 [2024-11-20 13:49:27.969798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:20.689 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:29:20.689 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:29:20.689 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:29:20.689 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:20.689 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:20.689 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:20.689 13:49:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:20.689 13:49:28 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:29:20.689 13:49:28 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:20.948 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:29:20.948 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:29:20.948 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:29:20.948 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:29:20.948 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:29:20.948 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:29:20.948 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:29:20.948 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:29:20.948 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:29:20.948 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:29:21.207 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:29:21.207 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:29:21.207 13:49:28 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:33.420 13:49:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.420 13:49:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:33.420 13:49:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:33.420 13:49:40 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.420 13:49:40 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:33.420 [2024-11-20 13:49:40.842645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:29:33.420 [2024-11-20 13:49:40.844491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:33.420 [2024-11-20 13:49:40.844600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.420 [2024-11-20 13:49:40.844682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.420 [2024-11-20 13:49:40.844779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:33.420 [2024-11-20 13:49:40.844836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.420 [2024-11-20 13:49:40.844907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.420 [2024-11-20 13:49:40.844982] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:33.420 [2024-11-20 13:49:40.845026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.420 [2024-11-20 13:49:40.845095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.420 [2024-11-20 13:49:40.845161] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:33.420 [2024-11-20 13:49:40.845207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.420 [2024-11-20 13:49:40.845279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.420 13:49:40 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:29:33.420 13:49:40 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:29:33.680 [2024-11-20 13:49:41.241885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
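Annotation: the "remove_attach_helper took 45.02s" style lines in this trace are produced by a wrapper that runs the helper under bash's built-in time with TIMEFORMAT=%2R (visible above as "local time=0 TIMEFORMAT=%2R"). A stripped-down sketch of that measurement, with a hypothetical stand-in for the real helper:

    # Time a shell function with bash's built-in `time`, keeping only the
    # elapsed wall-clock seconds (%2R = real time, two decimal places).
    remove_attach_helper() { sleep 2; }   # hypothetical stand-in for the real helper

    TIMEFORMAT=%2R
    helper_time=$( { time remove_attach_helper > /dev/null 2>&1; } 2>&1 )
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
        "$helper_time" 2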
00:29:33.680 [2024-11-20 13:49:41.244557] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:33.680 [2024-11-20 13:49:41.244606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.680 [2024-11-20 13:49:41.244626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.680 [2024-11-20 13:49:41.244652] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:33.680 [2024-11-20 13:49:41.244668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.680 [2024-11-20 13:49:41.244680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.680 [2024-11-20 13:49:41.244694] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:33.680 [2024-11-20 13:49:41.244706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.680 [2024-11-20 13:49:41.244734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.680 [2024-11-20 13:49:41.244747] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:33.680 [2024-11-20 13:49:41.244759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:29:33.680 [2024-11-20 13:49:41.244770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.680 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:29:33.680 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:29:33.680 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:29:33.680 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:33.680 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:33.680 13:49:41 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.680 13:49:41 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:33.680 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:33.680 13:49:41 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.940 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:29:33.940 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:29:33.940 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:29:33.940 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:29:33.940 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:29:33.940 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:29:33.940 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:29:33.940 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:29:33.940 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:29:33.940 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:29:34.200 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:29:34.200 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:29:34.200 13:49:41 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:29:46.449 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:29:46.449 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:29:46.449 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:29:46.449 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:46.449 13:49:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.449 13:49:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:46.449 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:46.449 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:46.449 13:49:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.449 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:29:46.449 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:29:46.449 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:46.449 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:46.449 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:29:46.449 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:29:46.449 [2024-11-20 13:49:53.817929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:29:46.449 [2024-11-20 13:49:53.820264] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:46.449 [2024-11-20 13:49:53.820375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.449 [2024-11-20 13:49:53.820453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.449 [2024-11-20 13:49:53.820543] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:46.449 [2024-11-20 13:49:53.820595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.449 [2024-11-20 13:49:53.820661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.449 [2024-11-20 13:49:53.820743] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:46.449 [2024-11-20 13:49:53.820792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.449 [2024-11-20 13:49:53.820859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.449 [2024-11-20 13:49:53.820938] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:46.449 [2024-11-20 13:49:53.820983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.450 [2024-11-20 13:49:53.821051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST 
(00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.450 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:29:46.450 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:29:46.450 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:29:46.450 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:29:46.450 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:29:46.450 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:29:46.450 13:49:53 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:46.450 13:49:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:29:46.450 13:49:53 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:46.450 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:29:46.450 13:49:53 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:29:46.708 [2024-11-20 13:49:54.217143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:29:46.708 [2024-11-20 13:49:54.219586] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:46.708 [2024-11-20 13:49:54.219686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.708 [2024-11-20 13:49:54.219754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.708 [2024-11-20 13:49:54.219832] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:46.708 [2024-11-20 13:49:54.219873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.708 [2024-11-20 13:49:54.219922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.708 [2024-11-20 13:49:54.219969] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:46.708 [2024-11-20 13:49:54.219996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.708 [2024-11-20 13:49:54.220035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.708 [2024-11-20 13:49:54.220049] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:29:46.708 [2024-11-20 13:49:54.220067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:29:46.708 [2024-11-20 13:49:54.220077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.708 [2024-11-20 13:49:54.220093] bdev_nvme.c:5568:aer_cb: *WARNING*: AER request execute failed 00:29:46.708 [2024-11-20 13:49:54.220107] bdev_nvme.c:5568:aer_cb: *WARNING*: AER request execute failed 00:29:46.708 [2024-11-20 13:49:54.220118] bdev_nvme.c:5568:aer_cb: *WARNING*: AER request execute failed 00:29:46.708 [2024-11-20 13:49:54.220126] bdev_nvme.c:5568:aer_cb: *WARNING*: AER request execute failed 00:29:46.708 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 
00:29:46.708 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs))
00:29:46.708 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs
00:29:46.708 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:29:46.708 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:29:46.708 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:29:46.708 13:49:54 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:46.708 13:49:54 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:29:46.708 13:49:54 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:46.968 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 ))
00:29:46.968 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1
00:29:46.968 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:29:46.968 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:29:46.968 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0
00:29:46.968 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0
00:29:46.968 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:29:46.968 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}"
00:29:46.968 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic
00:29:46.968 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0
00:29:46.968 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0
00:29:47.228 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo ''
00:29:47.228 13:49:54 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12
00:29:59.444 13:50:06 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true
00:29:59.444 13:50:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs))
00:29:59.444 13:50:06 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs
00:29:59.444 13:50:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63
00:29:59.444 13:50:06 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u
00:29:59.444 13:50:06 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs
00:29:59.444 13:50:06 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:59.444 13:50:06 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:29:59.444 13:50:06 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:59.444 13:50:06 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]]
00:29:59.444 13:50:06 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- ))
00:29:59.444 13:50:06 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.55
00:29:59.444 13:50:06 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.55
00:29:59.444 13:50:06 sw_hotplug -- common/autotest_common.sh@722 -- # return 0
00:29:59.444 13:50:06 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.55
00:29:59.444 13:50:06 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.55 2
00:29:59.444 remove_attach_helper took 45.55s to complete (handling 2 nvme drive(s)) 13:50:06 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT
00:29:59.444 13:50:06 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69254
00:29:59.444 13:50:06 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69254 ']'
00:29:59.444 13:50:06 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69254
00:29:59.444 13:50:06 sw_hotplug -- common/autotest_common.sh@959 -- # uname
00:29:59.444 13:50:06 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:59.444 13:50:06 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69254
00:29:59.444 13:50:06 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:29:59.444 13:50:06 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:29:59.444 13:50:06 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69254' killing process with pid 69254
00:29:59.444 13:50:06 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69254
00:29:59.444 13:50:06 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69254
00:30:01.979 13:50:09 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:30:02.238 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:30:02.807 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:30:02.807 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:30:03.066 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:30:03.066 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:30:03.066
00:30:03.066 real 2m33.740s
00:30:03.066 user 1m54.068s
00:30:03.066 sys 0m19.547s
00:30:03.066 13:50:10 sw_hotplug -- common/autotest_common.sh@1130 -- # xtrace_disable
00:30:03.066 13:50:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x
00:30:03.066 ************************************
00:30:03.066 END TEST sw_hotplug
00:30:03.066 ************************************
00:30:03.066 13:50:10 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]]
00:30:03.066 13:50:10 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh
00:30:03.066 13:50:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:30:03.066 13:50:10 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:30:03.066 13:50:10 -- common/autotest_common.sh@10 -- # set +x
00:30:03.066 ************************************
00:30:03.066 START TEST nvme_xnvme
00:30:03.066 ************************************
00:30:03.327 13:50:10 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh
00:30:03.327 * Looking for test storage...
00:30:03.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:30:03.327 13:50:10 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:30:03.328 13:50:10 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version
00:30:03.328 13:50:10 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:30:03.328 13:50:10 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-:
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-:
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<'
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@345 -- # : 1
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@353 -- # local d=1
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@355 -- # echo 1
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@353 -- # local d=2
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@355 -- # echo 2
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:03.328 13:50:10 nvme_xnvme -- scripts/common.sh@368 -- # return 0
00:30:03.328 13:50:10 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:03.328 13:50:10 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:30:03.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:03.328 --rc genhtml_branch_coverage=1
00:30:03.328 --rc genhtml_function_coverage=1
00:30:03.328 --rc genhtml_legend=1
00:30:03.328 --rc geninfo_all_blocks=1
00:30:03.328 --rc geninfo_unexecuted_blocks=1
00:30:03.328
00:30:03.328 '
00:30:03.328 13:50:10 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:30:03.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:03.328 --rc genhtml_branch_coverage=1
00:30:03.328 --rc genhtml_function_coverage=1
00:30:03.328 --rc genhtml_legend=1
00:30:03.328 --rc geninfo_all_blocks=1
00:30:03.328 --rc geninfo_unexecuted_blocks=1
00:30:03.328
00:30:03.328 '
00:30:03.328 13:50:10 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:30:03.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:03.328 --rc genhtml_branch_coverage=1
00:30:03.328 --rc genhtml_function_coverage=1
00:30:03.328 --rc genhtml_legend=1
00:30:03.328 --rc geninfo_all_blocks=1
00:30:03.328 --rc geninfo_unexecuted_blocks=1
00:30:03.328
00:30:03.328 '
00:30:03.328 13:50:10 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:30:03.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:03.328 --rc genhtml_branch_coverage=1
00:30:03.328 --rc genhtml_function_coverage=1
00:30:03.328 --rc genhtml_legend=1
00:30:03.328 --rc geninfo_all_blocks=1
00:30:03.328 --rc geninfo_unexecuted_blocks=1
00:30:03.328
00:30:03.328 '
00:30:03.328 13:50:10 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh
00:30:03.328 13:50:10 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh
00:30:03.328 13:50:10 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd
00:30:03.328 13:50:10 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e
00:30:03.328 13:50:10 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob
00:30:03.328 13:50:10 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob
00:30:03.328 13:50:10 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit
00:30:03.328 13:50:10 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']'
00:30:03.328 13:50:10 nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]]
00:30:03.328 13:50:10 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR=
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR=
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR=
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH=
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH=
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR=
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB=
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR=
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n
00:30:03.328 13:50:10 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH=
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR=
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH=
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@79 -- # CONFIG_RAID5F=n
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR=
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX=
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y
00:30:03.329 13:50:10 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n
00:30:03.329 13:50:10 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:30:03.329 13:50:10 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh
00:30:03.329 13:50:11 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common
00:30:03.329 13:50:11 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common
00:30:03.329 13:50:11 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk
00:30:03.329 13:50:11 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin
00:30:03.329 13:50:11 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app
00:30:03.329 13:50:11 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples
00:30:03.329 13:50:11 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz")
00:30:03.329 13:50:11 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt")
00:30:03.329 13:50:11 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt")
00:30:03.329 13:50:11 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost")
00:30:03.329 13:50:11 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd")
00:30:03.329 13:50:11 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt")
00:30:03.329 13:50:11 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]]
00:30:03.329 13:50:11 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H
00:30:03.329 #define SPDK_CONFIG_H
00:30:03.329 #define SPDK_CONFIG_AIO_FSDEV 1
00:30:03.329 #define SPDK_CONFIG_APPS 1
00:30:03.329 #define SPDK_CONFIG_ARCH native
00:30:03.329 #define SPDK_CONFIG_ASAN 1
00:30:03.329 #undef SPDK_CONFIG_AVAHI
00:30:03.329 #undef SPDK_CONFIG_CET
00:30:03.329 #define SPDK_CONFIG_COPY_FILE_RANGE 1
00:30:03.329 #define SPDK_CONFIG_COVERAGE 1
00:30:03.329 #define SPDK_CONFIG_CROSS_PREFIX
00:30:03.329 #undef SPDK_CONFIG_CRYPTO
00:30:03.329 #undef SPDK_CONFIG_CRYPTO_MLX5
00:30:03.329 #undef SPDK_CONFIG_CUSTOMOCF
00:30:03.329 #undef SPDK_CONFIG_DAOS
00:30:03.329 #define SPDK_CONFIG_DAOS_DIR
00:30:03.329 #define SPDK_CONFIG_DEBUG 1
00:30:03.329 #undef SPDK_CONFIG_DPDK_COMPRESSDEV
00:30:03.329 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build
00:30:03.329 #define SPDK_CONFIG_DPDK_INC_DIR
00:30:03.329 #define SPDK_CONFIG_DPDK_LIB_DIR
00:30:03.329 #undef SPDK_CONFIG_DPDK_PKG_CONFIG
00:30:03.329 #undef SPDK_CONFIG_DPDK_UADK
00:30:03.329 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:30:03.329 #define SPDK_CONFIG_EXAMPLES 1
00:30:03.329 #undef SPDK_CONFIG_FC
00:30:03.329 #define SPDK_CONFIG_FC_PATH
00:30:03.329 #define SPDK_CONFIG_FIO_PLUGIN 1
00:30:03.329 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio
00:30:03.329 #define SPDK_CONFIG_FSDEV 1
00:30:03.329 #undef SPDK_CONFIG_FUSE
00:30:03.329 #undef SPDK_CONFIG_FUZZER
00:30:03.329 #define SPDK_CONFIG_FUZZER_LIB
00:30:03.329 #undef SPDK_CONFIG_GOLANG
00:30:03.329 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1
00:30:03.329 #define SPDK_CONFIG_HAVE_EVP_MAC 1
00:30:03.329 #define SPDK_CONFIG_HAVE_EXECINFO_H 1
00:30:03.329 #define SPDK_CONFIG_HAVE_KEYUTILS 1
00:30:03.329 #undef SPDK_CONFIG_HAVE_LIBARCHIVE
00:30:03.329 #undef SPDK_CONFIG_HAVE_LIBBSD
00:30:03.329 #undef SPDK_CONFIG_HAVE_LZ4
00:30:03.329 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1
00:30:03.329 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC
00:30:03.329 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1
00:30:03.329 #define SPDK_CONFIG_IDXD 1
00:30:03.329 #define SPDK_CONFIG_IDXD_KERNEL 1
00:30:03.329 #undef SPDK_CONFIG_IPSEC_MB
00:30:03.329 #define SPDK_CONFIG_IPSEC_MB_DIR
00:30:03.329 #define SPDK_CONFIG_ISAL 1
00:30:03.329 #define SPDK_CONFIG_ISAL_CRYPTO 1
00:30:03.329 #define SPDK_CONFIG_ISCSI_INITIATOR 1
00:30:03.329 #define SPDK_CONFIG_LIBDIR
00:30:03.329 #undef SPDK_CONFIG_LTO
00:30:03.329 #define SPDK_CONFIG_MAX_LCORES 128
00:30:03.329 #define SPDK_CONFIG_MAX_NUMA_NODES 1
00:30:03.329 #define SPDK_CONFIG_NVME_CUSE 1
00:30:03.329 #undef SPDK_CONFIG_OCF
00:30:03.329 #define SPDK_CONFIG_OCF_PATH
00:30:03.329 #define SPDK_CONFIG_OPENSSL_PATH
00:30:03.329 #undef SPDK_CONFIG_PGO_CAPTURE
00:30:03.329 #define SPDK_CONFIG_PGO_DIR
00:30:03.329 #undef SPDK_CONFIG_PGO_USE
00:30:03.329 #define SPDK_CONFIG_PREFIX /usr/local
00:30:03.329 #undef SPDK_CONFIG_RAID5F
00:30:03.329 #undef SPDK_CONFIG_RBD
00:30:03.329 #define SPDK_CONFIG_RDMA 1
00:30:03.329 #define SPDK_CONFIG_RDMA_PROV verbs
00:30:03.329 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1
00:30:03.329 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1
00:30:03.329 #define SPDK_CONFIG_RDMA_SET_TOS 1
00:30:03.329 #define SPDK_CONFIG_SHARED 1
00:30:03.329 #undef SPDK_CONFIG_SMA
00:30:03.329 #define SPDK_CONFIG_TESTS 1
00:30:03.329 #undef SPDK_CONFIG_TSAN
00:30:03.329 #define SPDK_CONFIG_UBLK 1
00:30:03.329 #define SPDK_CONFIG_UBSAN 1
00:30:03.329 #undef SPDK_CONFIG_UNIT_TESTS
00:30:03.329 #undef SPDK_CONFIG_URING
00:30:03.329 #define SPDK_CONFIG_URING_PATH
00:30:03.329 #undef SPDK_CONFIG_URING_ZNS
00:30:03.329 #undef SPDK_CONFIG_USDT
00:30:03.329 #undef SPDK_CONFIG_VBDEV_COMPRESS
00:30:03.329 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5
00:30:03.329 #undef SPDK_CONFIG_VFIO_USER
00:30:03.330 #define SPDK_CONFIG_VFIO_USER_DIR
00:30:03.330 #define SPDK_CONFIG_VHOST 1
00:30:03.330 #define SPDK_CONFIG_VIRTIO 1
00:30:03.330 #undef SPDK_CONFIG_VTUNE
00:30:03.330 #define SPDK_CONFIG_VTUNE_DIR
00:30:03.330 #define SPDK_CONFIG_WERROR 1
00:30:03.330 #define SPDK_CONFIG_WPDK_DIR
00:30:03.330 #define SPDK_CONFIG_XNVME 1
00:30:03.330 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]]
00:30:03.330 13:50:11 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS ))
00:30:03.330 13:50:11 nvme_xnvme -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:30:03.330 13:50:11 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob
00:30:03.330 13:50:11 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:03.330 13:50:11 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:03.330 13:50:11 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:03.330 13:50:11 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:03.330 13:50:11 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:03.330 13:50:11 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:03.330 13:50:11 nvme_xnvme -- paths/export.sh@5 -- # export PATH
00:30:03.330 13:50:11 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:03.330 13:50:11 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:30:03.330 13:50:11 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common
00:30:03.330 13:50:11 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:30:03.330 13:50:11 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm
00:30:03.330 13:50:11 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../
00:30:03.330 13:50:11 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk
00:30:03.330 13:50:11 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A
00:30:03.330 13:50:11 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name
00:30:03.330 13:50:11 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power
00:30:03.330 13:50:11 nvme_xnvme -- pm/common@68 -- # uname -s
00:30:03.593 13:50:11 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux
00:30:03.593 13:50:11 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=()
00:30:03.593 13:50:11 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO
00:30:03.593 13:50:11 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1
00:30:03.593 13:50:11 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0
00:30:03.593 13:50:11 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0
00:30:03.593 13:50:11 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0
00:30:03.593 13:50:11 nvme_xnvme -- pm/common@76 -- # SUDO[0]=
00:30:03.593 13:50:11 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E'
00:30:03.593 13:50:11 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
00:30:03.593 13:50:11 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]]
00:30:03.593 13:50:11 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]]
00:30:03.593 13:50:11 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]]
00:30:03.593 13:50:11 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]]
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@58 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@70 -- # :
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@126 -- # :
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@130 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@140 -- # :
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@142 -- # : true
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@154 -- # :
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@169 -- # :
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@173 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:30:03.593 13:50:11 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@206 -- # cat
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV=
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt=
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind=
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind=
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE=
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70605 ]]
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70605
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.zZrAIk
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.zZrAIk/tests/xnvme /tmp/spdk.zZrAIk
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975801856
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592117248
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6261665792
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493775872
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506571776
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13975801856
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5592117248
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6266281984
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6266429440
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768
00:30:03.594 13:50:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253273600
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253285888
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=94173921280
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5528858624
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
00:30:03.595 * Looking for test storage...
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13975801856
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]]
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]]
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]]
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:30:03.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@1680 -- # set -o errtrace
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@1681 -- # shopt -s extdebug
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@1685 -- # true
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@1687 -- # xtrace_fd
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]]
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]]
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@27 -- # exec
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@29 -- # exec
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@1693 -- # lcov --version
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-:
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-:
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<'
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@345 -- # : 1
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 ))
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@353 -- # local d=1
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@355 -- # echo 1
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@353 -- # local d=2
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@355 -- # echo 2
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@368 -- # return 0
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:30:03.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:03.595 --rc genhtml_branch_coverage=1
00:30:03.595 --rc genhtml_function_coverage=1
00:30:03.595 --rc genhtml_legend=1
00:30:03.595 --rc geninfo_all_blocks=1
00:30:03.595 --rc geninfo_unexecuted_blocks=1
00:30:03.595
00:30:03.595 '
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:30:03.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:03.595 --rc genhtml_branch_coverage=1
00:30:03.595 --rc genhtml_function_coverage=1
00:30:03.595 --rc genhtml_legend=1
00:30:03.595 --rc geninfo_all_blocks=1
00:30:03.595 --rc geninfo_unexecuted_blocks=1
00:30:03.595
00:30:03.595 '
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:30:03.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:03.595 --rc genhtml_branch_coverage=1
00:30:03.595 --rc genhtml_function_coverage=1
00:30:03.595 --rc genhtml_legend=1
00:30:03.595 --rc geninfo_all_blocks=1
00:30:03.595 --rc geninfo_unexecuted_blocks=1
00:30:03.595
00:30:03.595 '
00:30:03.595 13:50:11 nvme_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:30:03.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:30:03.595 --rc genhtml_branch_coverage=1
00:30:03.595 --rc genhtml_function_coverage=1
00:30:03.595 --rc genhtml_legend=1
00:30:03.595 --rc geninfo_all_blocks=1
00:30:03.595 --rc geninfo_unexecuted_blocks=1
00:30:03.595
00:30:03.595 '
00:30:03.595 13:50:11 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:03.595 13:50:11 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:03.595 13:50:11 nvme_xnvme -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.595 13:50:11 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.595 13:50:11 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.595 13:50:11 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:30:03.595 13:50:11 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:03.595 13:50:11 nvme_xnvme -- xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:30:03.596 
13:50:11 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:30:03.596 13:50:11 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:30:04.165 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:30:04.425 Waiting for block devices as requested 00:30:04.425 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:30:04.684 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:30:04.684 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:30:04.684 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:30:09.962 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:30:09.962 13:50:17 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:30:10.221 13:50:17 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:30:10.221 13:50:17 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:30:10.481 13:50:18 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:30:10.481 13:50:18 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:30:10.481 No valid GPT data, bailing 00:30:10.481 13:50:18 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:30:10.481 13:50:18 nvme_xnvme -- scripts/common.sh@394 -- # pt= 00:30:10.481 13:50:18 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:30:10.481 13:50:18 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:30:10.481 13:50:18 
nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:10.481 13:50:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:10.481 13:50:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:10.481 ************************************ 00:30:10.481 START TEST xnvme_rpc 00:30:10.481 ************************************ 00:30:10.481 13:50:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:30:10.481 13:50:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:30:10.481 13:50:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:30:10.481 13:50:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:30:10.481 13:50:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:30:10.481 13:50:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:10.481 13:50:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70999 00:30:10.481 13:50:18 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70999 00:30:10.481 13:50:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70999 ']' 00:30:10.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:10.481 13:50:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:10.481 13:50:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:10.481 13:50:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:10.481 13:50:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:10.481 13:50:18 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:10.741 [2024-11-20 13:50:18.225948] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
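The pass below brings up spdk_tgt (pid 70999), creates the bdev with io_mechanism libaio and conserve_cpu left at its false default (the empty '' argument), then reads every parameter back through framework_get_config. A minimal sketch of that verification helper, reconstructed from the xnvme/common.sh@65-66 trace and assuming rpc_cmd is autotest_common.sh's wrapper around scripts/rpc.py:

    # Hedged sketch, not verbatim source: pull one field out of the
    # bdev_xnvme_create entry in the running target's bdev config.
    rpc_xnvme() {
        rpc_cmd framework_get_config bdev |
            jq -r ".[] | select(.method == \"bdev_xnvme_create\").params.$1"
    }

    [[ $(rpc_xnvme name) == xnvme_bdev ]]
    [[ $(rpc_xnvme filename) == /dev/nvme0n1 ]]
    [[ $(rpc_xnvme io_mechanism) == libaio ]]
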
00:30:10.741 [2024-11-20 13:50:18.226113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70999 ] 00:30:10.741 [2024-11-20 13:50:18.409593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.001 [2024-11-20 13:50:18.545400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:11.939 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:11.939 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:30:11.939 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:30:11.939 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.939 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:11.939 xnvme_bdev 00:30:11.939 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.939 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:30:11.939 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:30:11.939 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:30:11.939 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.939 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:11.939 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.939 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:30:11.939 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:30:11.939 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:30:11.940 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:30:11.940 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.940 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:11.940 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:11.940 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:30:11.940 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:30:11.940 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:30:11.940 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:30:11.940 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:11.940 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:11.940 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70999 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70999 ']' 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70999 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70999 00:30:12.199 killing process with pid 70999 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70999' 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70999 00:30:12.199 13:50:19 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70999 00:30:14.735 00:30:14.735 real 0m4.282s 00:30:14.735 user 0m4.418s 00:30:14.735 sys 0m0.522s 00:30:14.735 ************************************ 00:30:14.735 END TEST xnvme_rpc 00:30:14.735 ************************************ 00:30:14.735 13:50:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:14.735 13:50:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:14.735 13:50:22 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:30:14.735 13:50:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:14.735 13:50:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:14.735 13:50:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:14.995 ************************************ 00:30:14.995 START TEST xnvme_bdevperf 00:30:14.995 ************************************ 00:30:14.995 13:50:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:30:14.995 13:50:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:30:14.995 13:50:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:30:14.995 13:50:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:30:14.995 13:50:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:30:14.995 13:50:22 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
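The gen_conf call traced here appears to be what feeds bdevperf its bdev definition: the JSON dumped just below travels over /dev/fd/62 rather than a file on disk. The same randread pass can be reproduced by hand with process substitution; a hedged sketch using only the paths and flags from this job:

    # Sketch only: mirrors the traced run (-q 64, randread, 5 s, 4 KiB I/O).
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json <(cat <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_xnvme_create",
              "params": {
                "io_mechanism": "libaio",
                "conserve_cpu": false,
                "filename": "/dev/nvme0n1",
                "name": "xnvme_bdev"
              }
            },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF
    ) -q 64 -w randread -t 5 -T xnvme_bdev -o 4096
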
00:30:14.995 13:50:22 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:30:14.995 13:50:22 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:14.995 { 00:30:14.995 "subsystems": [ 00:30:14.995 { 00:30:14.995 "subsystem": "bdev", 00:30:14.995 "config": [ 00:30:14.995 { 00:30:14.995 "params": { 00:30:14.995 "io_mechanism": "libaio", 00:30:14.995 "conserve_cpu": false, 00:30:14.995 "filename": "/dev/nvme0n1", 00:30:14.995 "name": "xnvme_bdev" 00:30:14.995 }, 00:30:14.995 "method": "bdev_xnvme_create" 00:30:14.995 }, 00:30:14.995 { 00:30:14.995 "method": "bdev_wait_for_examine" 00:30:14.995 } 00:30:14.995 ] 00:30:14.995 } 00:30:14.995 ] 00:30:14.995 } 00:30:14.995 [2024-11-20 13:50:22.555149] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:30:14.995 [2024-11-20 13:50:22.555383] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71090 ] 00:30:15.256 [2024-11-20 13:50:22.734883] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.256 [2024-11-20 13:50:22.859992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.825 Running I/O for 5 seconds... 00:30:17.696 38815.00 IOPS, 151.62 MiB/s [2024-11-20T13:50:26.348Z] 38068.00 IOPS, 148.70 MiB/s [2024-11-20T13:50:27.281Z] 37425.67 IOPS, 146.19 MiB/s [2024-11-20T13:50:28.684Z] 37228.50 IOPS, 145.42 MiB/s 00:30:20.965 Latency(us) 00:30:20.965 [2024-11-20T13:50:28.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.965 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:30:20.965 xnvme_bdev : 5.00 36982.33 144.46 0.00 0.00 1727.03 211.95 5208.54 00:30:20.965 [2024-11-20T13:50:28.684Z] =================================================================================================================== 00:30:20.965 [2024-11-20T13:50:28.684Z] Total : 36982.33 144.46 0.00 0.00 1727.03 211.95 5208.54 00:30:21.898 13:50:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:30:21.898 13:50:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:30:21.898 13:50:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:30:21.898 13:50:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:30:21.898 13:50:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:22.160 { 00:30:22.160 "subsystems": [ 00:30:22.160 { 00:30:22.160 "subsystem": "bdev", 00:30:22.160 "config": [ 00:30:22.160 { 00:30:22.160 "params": { 00:30:22.160 "io_mechanism": "libaio", 00:30:22.160 "conserve_cpu": false, 00:30:22.160 "filename": "/dev/nvme0n1", 00:30:22.160 "name": "xnvme_bdev" 00:30:22.160 }, 00:30:22.160 "method": "bdev_xnvme_create" 00:30:22.160 }, 00:30:22.160 { 00:30:22.160 "method": "bdev_wait_for_examine" 00:30:22.160 } 00:30:22.160 ] 00:30:22.160 } 00:30:22.160 ] 00:30:22.160 } 00:30:22.160 [2024-11-20 13:50:29.668867] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:30:22.160 [2024-11-20 13:50:29.669064] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71169 ] 00:30:22.160 [2024-11-20 13:50:29.849682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:22.419 [2024-11-20 13:50:29.974689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:22.677 Running I/O for 5 seconds... 00:30:24.989 39340.00 IOPS, 153.67 MiB/s [2024-11-20T13:50:33.644Z] 38085.00 IOPS, 148.77 MiB/s [2024-11-20T13:50:34.587Z] 37869.33 IOPS, 147.93 MiB/s [2024-11-20T13:50:35.530Z] 37525.25 IOPS, 146.58 MiB/s 00:30:27.811 Latency(us) 00:30:27.811 [2024-11-20T13:50:35.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.811 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:30:27.811 xnvme_bdev : 5.00 36809.21 143.79 0.00 0.00 1734.66 153.82 7841.43 00:30:27.811 [2024-11-20T13:50:35.530Z] =================================================================================================================== 00:30:27.811 [2024-11-20T13:50:35.530Z] Total : 36809.21 143.79 0.00 0.00 1734.66 153.82 7841.43 00:30:29.188 00:30:29.188 real 0m14.122s 00:30:29.188 user 0m5.427s 00:30:29.188 sys 0m6.047s 00:30:29.188 13:50:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:29.188 13:50:36 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:29.188 ************************************ 00:30:29.188 END TEST xnvme_bdevperf 00:30:29.188 ************************************ 00:30:29.188 13:50:36 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:30:29.188 13:50:36 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:29.188 13:50:36 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:29.188 13:50:36 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:29.188 ************************************ 00:30:29.188 START TEST xnvme_fio_plugin 00:30:29.188 ************************************ 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 
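fio reaches the same bdev through SPDK's external spdk_bdev ioengine rather than a kernel block device. As the trace below shows, the harness locates libasan with ldd, then injects both the sanitizer and the engine object via LD_PRELOAD before launching /usr/src/fio/fio. A condensed sketch of the resulting invocation, with bdev.json standing in for the /dev/fd/62 config (an illustrative file name, not from the log):

    # Hedged sketch of the traced fio command; same flags, config written to a file.
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev
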
00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:29.188 13:50:36 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:30:29.188 { 00:30:29.188 "subsystems": [ 00:30:29.188 { 00:30:29.188 "subsystem": "bdev", 00:30:29.188 "config": [ 00:30:29.188 { 00:30:29.188 "params": { 00:30:29.188 "io_mechanism": "libaio", 00:30:29.188 "conserve_cpu": false, 00:30:29.188 "filename": "/dev/nvme0n1", 00:30:29.188 "name": "xnvme_bdev" 00:30:29.188 }, 00:30:29.188 "method": "bdev_xnvme_create" 00:30:29.188 }, 00:30:29.188 { 00:30:29.188 "method": "bdev_wait_for_examine" 00:30:29.188 } 00:30:29.188 ] 00:30:29.188 } 00:30:29.188 ] 00:30:29.188 } 00:30:29.188 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:30:29.188 fio-3.35 00:30:29.188 Starting 1 thread 00:30:35.761 00:30:35.761 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71295: Wed Nov 20 13:50:42 2024 00:30:35.761 read: IOPS=43.9k, BW=171MiB/s (180MB/s)(857MiB/5001msec) 00:30:35.761 slat (usec): min=3, max=627, avg=19.10, stdev=25.92 00:30:35.761 clat (usec): min=42, max=6350, avg=870.51, stdev=595.31 00:30:35.761 lat (usec): min=149, max=6423, avg=889.61, stdev=601.59 00:30:35.761 clat percentiles (usec): 00:30:35.761 | 1.00th=[ 184], 5.00th=[ 265], 10.00th=[ 330], 20.00th=[ 453], 00:30:35.761 | 30.00th=[ 562], 40.00th=[ 652], 50.00th=[ 750], 60.00th=[ 848], 00:30:35.761 | 70.00th=[ 971], 80.00th=[ 1123], 90.00th=[ 1434], 95.00th=[ 2024], 00:30:35.761 | 99.00th=[ 3359], 99.50th=[ 3916], 99.90th=[ 4752], 99.95th=[ 5014], 00:30:35.761 | 99.99th=[ 5604] 00:30:35.761 bw ( KiB/s): min=136184, max=224136, per=100.00%, avg=178106.67, stdev=28168.03, samples=9 
00:30:35.761 iops : min=34046, max=56034, avg=44526.67, stdev=7042.01, samples=9 00:30:35.761 lat (usec) : 50=0.01%, 100=0.01%, 250=4.18%, 500=20.23%, 750=25.90% 00:30:35.761 lat (usec) : 1000=22.05% 00:30:35.761 lat (msec) : 2=22.56%, 4=4.63%, 10=0.44% 00:30:35.761 cpu : usr=30.56%, sys=51.54%, ctx=81, majf=0, minf=764 00:30:35.761 IO depths : 1=0.2%, 2=1.3%, 4=4.3%, 8=10.8%, 16=25.3%, 32=56.4%, >=64=1.8% 00:30:35.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.761 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:30:35.761 issued rwts: total=219447,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.761 latency : target=0, window=0, percentile=100.00%, depth=64 00:30:35.761 00:30:35.761 Run status group 0 (all jobs): 00:30:35.761 READ: bw=171MiB/s (180MB/s), 171MiB/s-171MiB/s (180MB/s-180MB/s), io=857MiB (899MB), run=5001-5001msec 00:30:36.710 ----------------------------------------------------- 00:30:36.710 Suppressions used: 00:30:36.710 count bytes template 00:30:36.710 1 11 /usr/src/fio/parse.c 00:30:36.710 1 8 libtcmalloc_minimal.so 00:30:36.710 1 904 libcrypto.so 00:30:36.710 ----------------------------------------------------- 00:30:36.710 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:36.710 13:50:44 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:30:36.710 13:50:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:30:36.710 { 00:30:36.710 "subsystems": [ 00:30:36.710 { 00:30:36.710 "subsystem": "bdev", 00:30:36.710 "config": [ 00:30:36.710 { 00:30:36.710 "params": { 00:30:36.710 "io_mechanism": "libaio", 00:30:36.710 "conserve_cpu": false, 00:30:36.710 "filename": "/dev/nvme0n1", 00:30:36.710 "name": "xnvme_bdev" 00:30:36.710 }, 00:30:36.710 "method": "bdev_xnvme_create" 00:30:36.710 }, 00:30:36.710 { 00:30:36.710 "method": "bdev_wait_for_examine" 00:30:36.710 } 00:30:36.710 ] 00:30:36.710 } 00:30:36.710 ] 00:30:36.710 } 00:30:36.975 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:30:36.975 fio-3.35 00:30:36.975 Starting 1 thread 00:30:43.538 00:30:43.538 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71393: Wed Nov 20 13:50:50 2024 00:30:43.538 write: IOPS=43.5k, BW=170MiB/s (178MB/s)(851MiB/5001msec); 0 zone resets 00:30:43.538 slat (usec): min=3, max=836, avg=19.59, stdev=29.92 00:30:43.538 clat (usec): min=87, max=6340, avg=869.42, stdev=606.21 00:30:43.538 lat (usec): min=110, max=6410, avg=889.01, stdev=612.60 00:30:43.538 clat percentiles (usec): 00:30:43.538 | 1.00th=[ 194], 5.00th=[ 277], 10.00th=[ 338], 20.00th=[ 445], 00:30:43.538 | 30.00th=[ 545], 40.00th=[ 644], 50.00th=[ 750], 60.00th=[ 848], 00:30:43.538 | 70.00th=[ 963], 80.00th=[ 1106], 90.00th=[ 1418], 95.00th=[ 2024], 00:30:43.538 | 99.00th=[ 3523], 99.50th=[ 3982], 99.90th=[ 4686], 99.95th=[ 4883], 00:30:43.538 | 99.99th=[ 5342] 00:30:43.538 bw ( KiB/s): min=141208, max=222320, per=100.00%, avg=175953.78, stdev=28566.31, samples=9 00:30:43.538 iops : min=35302, max=55580, avg=43988.44, stdev=7141.58, samples=9 00:30:43.538 lat (usec) : 100=0.01%, 250=3.15%, 500=22.18%, 750=24.98%, 1000=22.72% 00:30:43.538 lat (msec) : 2=21.84%, 4=4.63%, 10=0.49% 00:30:43.538 cpu : usr=29.60%, sys=53.52%, ctx=103, majf=0, minf=765 00:30:43.538 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=10.9%, 16=26.0%, 32=56.4%, >=64=1.8% 00:30:43.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:43.538 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:30:43.538 issued rwts: total=0,217742,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:43.538 latency : target=0, window=0, percentile=100.00%, depth=64 00:30:43.538 00:30:43.538 Run status group 0 (all jobs): 00:30:43.538 WRITE: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=851MiB (892MB), run=5001-5001msec 00:30:44.476 ----------------------------------------------------- 00:30:44.476 Suppressions used: 00:30:44.476 count bytes template 00:30:44.476 1 11 /usr/src/fio/parse.c 00:30:44.476 1 8 libtcmalloc_minimal.so 00:30:44.476 1 904 libcrypto.so 00:30:44.476 ----------------------------------------------------- 00:30:44.476 00:30:44.476 00:30:44.476 real 0m15.262s 00:30:44.476 user 0m7.214s 00:30:44.476 sys 0m5.984s 00:30:44.476 
************************************ 00:30:44.476 13:50:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.476 13:50:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:30:44.476 END TEST xnvme_fio_plugin 00:30:44.476 ************************************ 00:30:44.476 13:50:51 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:30:44.476 13:50:51 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:30:44.476 13:50:51 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:30:44.476 13:50:51 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:30:44.476 13:50:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:44.476 13:50:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.476 13:50:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:44.476 ************************************ 00:30:44.476 START TEST xnvme_rpc 00:30:44.476 ************************************ 00:30:44.476 13:50:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:30:44.476 13:50:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:30:44.476 13:50:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:30:44.476 13:50:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:30:44.476 13:50:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:30:44.476 13:50:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71479 00:30:44.476 13:50:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:44.476 13:50:51 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71479 00:30:44.476 13:50:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71479 ']' 00:30:44.476 13:50:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.476 13:50:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:44.476 13:50:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.476 13:50:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:44.476 13:50:51 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:44.476 [2024-11-20 13:50:52.075520] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
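This second xnvme_rpc pass repeats the round trip with CPU conservation enabled: cc["true"]=-c maps the option onto the create call, and the jq probe is expected to read back true instead of false. A condensed sketch of the lifecycle traced below (pid 71479), assuming the same rpc_cmd wrapper as before:

    # Hedged sketch: create with -c, verify, tear down.
    rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c   # -c => conserve_cpu=true
    rpc_cmd framework_get_config bdev |
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'  # prints: true
    rpc_cmd bdev_xnvme_delete xnvme_bdev
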
00:30:44.476 [2024-11-20 13:50:52.075775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71479 ] 00:30:44.735 [2024-11-20 13:50:52.238531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.735 [2024-11-20 13:50:52.365993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:45.683 xnvme_bdev 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:30:45.683 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71479 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71479 ']' 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71479 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:30:45.961 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:45.962 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71479 00:30:45.962 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:45.962 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:45.962 killing process with pid 71479 00:30:45.962 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71479' 00:30:45.962 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71479 00:30:45.962 13:50:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71479 00:30:48.500 00:30:48.500 real 0m4.079s 00:30:48.500 user 0m4.189s 00:30:48.500 sys 0m0.499s 00:30:48.500 13:50:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:48.500 13:50:56 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:30:48.500 ************************************ 00:30:48.500 END TEST xnvme_rpc 00:30:48.500 ************************************ 00:30:48.500 13:50:56 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:30:48.500 13:50:56 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:48.500 13:50:56 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:48.500 13:50:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:30:48.500 ************************************ 00:30:48.501 START TEST xnvme_bdevperf 00:30:48.501 ************************************ 00:30:48.501 13:50:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:30:48.501 13:50:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:30:48.501 13:50:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:30:48.501 13:50:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:30:48.501 13:50:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:30:48.501 13:50:56 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:30:48.501 13:50:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:30:48.501 13:50:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:48.501 { 00:30:48.501 "subsystems": [ 00:30:48.501 { 00:30:48.501 "subsystem": "bdev", 00:30:48.501 "config": [ 00:30:48.501 { 00:30:48.501 "params": { 00:30:48.501 "io_mechanism": "libaio", 00:30:48.501 "conserve_cpu": true, 00:30:48.501 "filename": "/dev/nvme0n1", 00:30:48.501 "name": "xnvme_bdev" 00:30:48.501 }, 00:30:48.501 "method": "bdev_xnvme_create" 00:30:48.501 }, 00:30:48.501 { 00:30:48.501 "method": "bdev_wait_for_examine" 00:30:48.501 } 00:30:48.501 ] 00:30:48.501 } 00:30:48.501 ] 00:30:48.501 } 00:30:48.501 [2024-11-20 13:50:56.212910] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:30:48.501 [2024-11-20 13:50:56.213109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71565 ] 00:30:48.761 [2024-11-20 13:50:56.388915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:49.022 [2024-11-20 13:50:56.513385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:49.281 Running I/O for 5 seconds... 00:30:51.609 41655.00 IOPS, 162.71 MiB/s [2024-11-20T13:50:59.946Z] 39621.00 IOPS, 154.77 MiB/s [2024-11-20T13:51:01.324Z] 39252.33 IOPS, 153.33 MiB/s [2024-11-20T13:51:02.023Z] 38943.25 IOPS, 152.12 MiB/s 00:30:54.304 Latency(us) 00:30:54.304 [2024-11-20T13:51:02.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:54.304 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:30:54.304 xnvme_bdev : 5.00 38571.64 150.67 0.00 0.00 1655.56 170.82 9959.18 00:30:54.304 [2024-11-20T13:51:02.023Z] =================================================================================================================== 00:30:54.304 [2024-11-20T13:51:02.023Z] Total : 38571.64 150.67 0.00 0.00 1655.56 170.82 9959.18 00:30:55.683 13:51:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:30:55.683 13:51:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:30:55.683 13:51:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:30:55.683 13:51:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:30:55.683 13:51:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:55.683 { 00:30:55.683 "subsystems": [ 00:30:55.683 { 00:30:55.683 "subsystem": "bdev", 00:30:55.683 "config": [ 00:30:55.683 { 00:30:55.683 "params": { 00:30:55.683 "io_mechanism": "libaio", 00:30:55.683 "conserve_cpu": true, 00:30:55.683 "filename": "/dev/nvme0n1", 00:30:55.683 "name": "xnvme_bdev" 00:30:55.683 }, 00:30:55.683 "method": "bdev_xnvme_create" 00:30:55.683 }, 00:30:55.683 { 00:30:55.683 "method": "bdev_wait_for_examine" 00:30:55.683 } 00:30:55.683 ] 00:30:55.683 } 00:30:55.683 ] 00:30:55.683 } 00:30:55.683 [2024-11-20 13:51:03.180750] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:30:55.683 [2024-11-20 13:51:03.180966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71646 ] 00:30:55.683 [2024-11-20 13:51:03.357847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:55.943 [2024-11-20 13:51:03.480939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.201 Running I/O for 5 seconds... 00:30:58.148 37879.00 IOPS, 147.96 MiB/s [2024-11-20T13:51:07.243Z] 37062.50 IOPS, 144.78 MiB/s [2024-11-20T13:51:08.179Z] 35821.67 IOPS, 139.93 MiB/s [2024-11-20T13:51:09.120Z] 35021.25 IOPS, 136.80 MiB/s 00:31:01.401 Latency(us) 00:31:01.401 [2024-11-20T13:51:09.120Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.401 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:31:01.401 xnvme_bdev : 5.00 34685.27 135.49 0.00 0.00 1840.78 171.71 9043.40 00:31:01.401 [2024-11-20T13:51:09.120Z] =================================================================================================================== 00:31:01.401 [2024-11-20T13:51:09.120Z] Total : 34685.27 135.49 0.00 0.00 1840.78 171.71 9043.40 00:31:02.352 00:31:02.352 real 0m13.954s 00:31:02.352 user 0m5.339s 00:31:02.352 sys 0m6.104s 00:31:02.611 13:51:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:02.611 13:51:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:02.611 ************************************ 00:31:02.611 END TEST xnvme_bdevperf 00:31:02.611 ************************************ 00:31:02.611 13:51:10 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:31:02.611 13:51:10 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:02.611 13:51:10 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:02.611 13:51:10 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:31:02.611 ************************************ 00:31:02.611 START TEST xnvme_fio_plugin 00:31:02.611 ************************************ 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:02.611 13:51:10 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:02.611 { 00:31:02.611 "subsystems": [ 00:31:02.611 { 00:31:02.611 "subsystem": "bdev", 00:31:02.611 "config": [ 00:31:02.611 { 00:31:02.611 "params": { 00:31:02.611 "io_mechanism": "libaio", 00:31:02.611 "conserve_cpu": true, 00:31:02.611 "filename": "/dev/nvme0n1", 00:31:02.611 "name": "xnvme_bdev" 00:31:02.611 }, 00:31:02.611 "method": "bdev_xnvme_create" 00:31:02.611 }, 00:31:02.611 { 00:31:02.611 "method": "bdev_wait_for_examine" 00:31:02.611 } 00:31:02.611 ] 00:31:02.611 } 00:31:02.611 ] 00:31:02.611 } 00:31:02.871 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:31:02.871 fio-3.35 00:31:02.871 Starting 1 thread 00:31:09.454 00:31:09.454 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71771: Wed Nov 20 13:51:16 2024 00:31:09.454 read: IOPS=43.7k, BW=171MiB/s (179MB/s)(855MiB/5001msec) 00:31:09.454 slat (usec): min=4, max=731, avg=19.48, stdev=26.15 00:31:09.454 clat (usec): min=60, max=6152, avg=861.41, stdev=598.98 00:31:09.454 lat (usec): min=144, max=6206, avg=880.90, stdev=605.15 00:31:09.454 clat percentiles (usec): 00:31:09.454 | 1.00th=[ 178], 5.00th=[ 251], 10.00th=[ 318], 20.00th=[ 433], 00:31:09.454 | 30.00th=[ 545], 40.00th=[ 652], 50.00th=[ 750], 60.00th=[ 857], 00:31:09.454 | 70.00th=[ 971], 80.00th=[ 1123], 90.00th=[ 1385], 95.00th=[ 1958], 00:31:09.454 | 99.00th=[ 3458], 99.50th=[ 4015], 99.90th=[ 4686], 99.95th=[ 4948], 00:31:09.454 | 99.99th=[ 5276] 00:31:09.454 bw ( KiB/s): min=156936, max=207712, per=99.66%, avg=174392.22, stdev=16083.47, samples=9 
00:31:09.454 iops : min=39234, max=51928, avg=43598.00, stdev=4020.94, samples=9 00:31:09.454 lat (usec) : 100=0.01%, 250=4.91%, 500=21.35%, 750=23.50%, 1000=22.59% 00:31:09.454 lat (msec) : 2=22.84%, 4=4.30%, 10=0.50% 00:31:09.454 cpu : usr=29.56%, sys=52.44%, ctx=105, majf=0, minf=764 00:31:09.454 IO depths : 1=0.2%, 2=1.3%, 4=4.4%, 8=11.2%, 16=25.7%, 32=55.5%, >=64=1.8% 00:31:09.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:09.454 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.7%, >=64=0.0% 00:31:09.454 issued rwts: total=218778,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:09.454 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:09.454 00:31:09.454 Run status group 0 (all jobs): 00:31:09.454 READ: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=855MiB (896MB), run=5001-5001msec 00:31:10.021 ----------------------------------------------------- 00:31:10.021 Suppressions used: 00:31:10.021 count bytes template 00:31:10.021 1 11 /usr/src/fio/parse.c 00:31:10.021 1 8 libtcmalloc_minimal.so 00:31:10.021 1 904 libcrypto.so 00:31:10.021 ----------------------------------------------------- 00:31:10.021 00:31:10.021 13:51:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:31:10.021 13:51:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:10.021 13:51:17 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:31:10.021 13:51:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:10.021 13:51:17 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:31:10.021 13:51:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:10.021 13:51:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:31:10.021 13:51:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:10.021 13:51:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:10.021 13:51:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:10.021 13:51:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:31:10.021 13:51:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:10.021 13:51:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.021 13:51:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:10.021 13:51:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:10.021 13:51:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:31:10.281 13:51:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:10.281 13:51:17 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:10.281 13:51:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:31:10.281 13:51:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:10.281 13:51:17 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:10.281 { 00:31:10.281 "subsystems": [ 00:31:10.281 { 00:31:10.281 "subsystem": "bdev", 00:31:10.281 "config": [ 00:31:10.281 { 00:31:10.281 "params": { 00:31:10.281 "io_mechanism": "libaio", 00:31:10.281 "conserve_cpu": true, 00:31:10.281 "filename": "/dev/nvme0n1", 00:31:10.281 "name": "xnvme_bdev" 00:31:10.281 }, 00:31:10.281 "method": "bdev_xnvme_create" 00:31:10.281 }, 00:31:10.281 { 00:31:10.281 "method": "bdev_wait_for_examine" 00:31:10.281 } 00:31:10.281 ] 00:31:10.281 } 00:31:10.281 ] 00:31:10.281 } 00:31:10.281 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:31:10.281 fio-3.35 00:31:10.281 Starting 1 thread 00:31:16.852 00:31:16.852 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71868: Wed Nov 20 13:51:23 2024 00:31:16.852 write: IOPS=43.9k, BW=171MiB/s (180MB/s)(857MiB/5001msec); 0 zone resets 00:31:16.852 slat (usec): min=4, max=1398, avg=19.23, stdev=27.99 00:31:16.852 clat (usec): min=107, max=6514, avg=870.41, stdev=592.94 00:31:16.852 lat (usec): min=121, max=6605, avg=889.64, stdev=598.94 00:31:16.852 clat percentiles (usec): 00:31:16.852 | 1.00th=[ 186], 5.00th=[ 269], 10.00th=[ 338], 20.00th=[ 457], 00:31:16.852 | 30.00th=[ 562], 40.00th=[ 660], 50.00th=[ 758], 60.00th=[ 857], 00:31:16.852 | 70.00th=[ 963], 80.00th=[ 1106], 90.00th=[ 1401], 95.00th=[ 1975], 00:31:16.852 | 99.00th=[ 3458], 99.50th=[ 3949], 99.90th=[ 4686], 99.95th=[ 4883], 00:31:16.852 | 99.99th=[ 5407] 00:31:16.852 bw ( KiB/s): min=142216, max=200320, per=99.97%, avg=175444.44, stdev=22871.89, samples=9 00:31:16.852 iops : min=35554, max=50080, avg=43861.11, stdev=5717.97, samples=9 00:31:16.852 lat (usec) : 250=3.94%, 500=20.06%, 750=25.46%, 1000=23.24% 00:31:16.852 lat (msec) : 2=22.40%, 4=4.46%, 10=0.45% 00:31:16.852 cpu : usr=30.80%, sys=52.50%, ctx=44, majf=0, minf=765 00:31:16.852 IO depths : 1=0.2%, 2=1.2%, 4=4.2%, 8=10.8%, 16=25.3%, 32=56.6%, >=64=1.8% 00:31:16.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.853 complete : 0=0.0%, 4=98.2%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.6%, >=64=0.0% 00:31:16.853 issued rwts: total=0,219405,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:16.853 00:31:16.853 Run status group 0 (all jobs): 00:31:16.853 WRITE: bw=171MiB/s (180MB/s), 171MiB/s-171MiB/s (180MB/s-180MB/s), io=857MiB (899MB), run=5001-5001msec 00:31:17.834 ----------------------------------------------------- 00:31:17.834 Suppressions used: 00:31:17.834 count bytes template 00:31:17.834 1 11 /usr/src/fio/parse.c 00:31:17.834 1 8 libtcmalloc_minimal.so 00:31:17.834 1 904 libcrypto.so 00:31:17.834 ----------------------------------------------------- 00:31:17.834 00:31:17.834 ************************************ 00:31:17.834 END TEST xnvme_fio_plugin 00:31:17.834 ************************************ 00:31:17.834 
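Note on the two fio passes above (randread, then randwrite, against the libaio-backed xNVMe bdev): fio has no native notion of SPDK bdevs, so the harness preloads SPDK's external ioengine (build/fio/spdk_bdev) and hands it the bdev layout as JSON; the /dev/fd/62 in --spdk_json_conf is almost certainly bash process substitution of gen_conf's output. A minimal standalone reproduction of the randwrite pass, assuming this log's paths (SPDK under /home/vagrant/spdk_repo/spdk, fio built in /usr/src/fio); gen_conf here is a stand-in for the harness function of the same name:

# Sketch, not the harness's verbatim code; paths mirror this log's VM.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
FIO_BIN=/usr/src/fio/fio
PLUGIN=$SPDK_DIR/build/fio/spdk_bdev

# The plugin is ASan-instrumented, so libasan must come first in LD_PRELOAD;
# this mirrors the ldd/grep/awk lookup visible in the xtrace above.
ASAN_LIB=$(ldd "$PLUGIN" | grep libasan | awk '{print $3}')

gen_conf() {  # stand-in for the harness's gen_conf; emits the JSON shown above
cat <<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"io_mechanism":"libaio","conserve_cpu":true,
             "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
   "method":"bdev_xnvme_create"},
  {"method":"bdev_wait_for_examine"}]}]}
JSON
}

LD_PRELOAD="$ASAN_LIB $PLUGIN" "$FIO_BIN" \
  --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randwrite --time_based --runtime=5 --thread=1 --name=xnvme_bdev

The ldd | grep libasan | awk '{print $3}' sequence in the xtrace exists only because the plugin links ASan; on a non-sanitized build, LD_PRELOAD="$PLUGIN" alone would do.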
00:31:17.834 real 0m15.232s 00:31:17.834 user 0m7.177s 00:31:17.834 sys 0m6.001s 00:31:17.834 13:51:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:17.834 13:51:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:31:17.834 13:51:25 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:31:17.834 13:51:25 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:31:17.834 13:51:25 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:31:17.834 13:51:25 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:31:17.834 13:51:25 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:31:17.834 13:51:25 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:31:17.834 13:51:25 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:31:17.834 13:51:25 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:31:17.834 13:51:25 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:31:17.834 13:51:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:17.834 13:51:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:17.834 13:51:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:31:17.834 ************************************ 00:31:17.834 START TEST xnvme_rpc 00:31:17.834 ************************************ 00:31:17.834 13:51:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:31:17.834 13:51:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:31:17.834 13:51:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:31:17.834 13:51:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:31:17.834 13:51:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:31:17.834 13:51:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71959 00:31:17.834 13:51:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71959 00:31:17.834 13:51:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71959 ']' 00:31:17.834 13:51:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.834 13:51:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:17.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.834 13:51:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.834 13:51:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:17.834 13:51:25 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:17.834 13:51:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:17.834 [2024-11-20 13:51:25.530106] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
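The spdk_tgt announced here (pid 71959) backs TEST xnvme_rpc for the io_uring/conserve_cpu=false case; its full DPDK EAL command line follows. The waitforlisten 71959 call above blocks until the target answers on /var/tmp/spdk.sock. A rough sketch of that wait, using SPDK's stock rpc.py client and the real spdk_get_version method (the retry count and sleep interval are illustrative, not the harness's values):

# Rough sketch of the start-and-wait pattern: launch the target, then poll
# its UNIX-domain RPC socket until it services requests (or give up).
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR"/build/bin/spdk_tgt &
tgt_pid=$!

for _ in $(seq 1 100); do
  "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version \
    >/dev/null 2>&1 && break
  sleep 0.1
done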
00:31:17.834 [2024-11-20 13:51:25.530247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71959 ] 00:31:18.093 [2024-11-20 13:51:25.703469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:18.352 [2024-11-20 13:51:25.838170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:19.292 xnvme_bdev 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.292 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:19.293 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.293 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:31:19.293 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:31:19.293 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:19.293 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:19.293 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:19.293 13:51:26 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71959 00:31:19.293 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71959 ']' 00:31:19.293 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71959 00:31:19.293 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:31:19.293 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:19.293 13:51:26 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71959 00:31:19.552 killing process with pid 71959 00:31:19.552 13:51:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:19.552 13:51:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:19.552 13:51:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71959' 00:31:19.552 13:51:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71959 00:31:19.552 13:51:27 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71959 00:31:22.113 00:31:22.113 real 0m4.284s 00:31:22.113 user 0m4.418s 00:31:22.113 sys 0m0.497s 00:31:22.113 ************************************ 00:31:22.113 END TEST xnvme_rpc 00:31:22.113 ************************************ 00:31:22.113 13:51:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:22.113 13:51:29 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:22.113 13:51:29 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:31:22.113 13:51:29 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:22.113 13:51:29 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:22.113 13:51:29 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:31:22.113 ************************************ 00:31:22.113 START TEST xnvme_bdevperf 00:31:22.113 ************************************ 00:31:22.113 13:51:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:31:22.113 13:51:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:31:22.113 13:51:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:31:22.113 13:51:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:31:22.113 13:51:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:31:22.113 13:51:29 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:31:22.113 13:51:29 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:31:22.114 13:51:29 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:22.114 { 00:31:22.114 "subsystems": [ 00:31:22.114 { 00:31:22.114 "subsystem": "bdev", 00:31:22.114 "config": [ 00:31:22.114 { 00:31:22.114 "params": { 00:31:22.114 "io_mechanism": "io_uring", 00:31:22.114 "conserve_cpu": false, 00:31:22.114 "filename": "/dev/nvme0n1", 00:31:22.114 "name": "xnvme_bdev" 00:31:22.114 }, 00:31:22.114 "method": "bdev_xnvme_create" 00:31:22.114 }, 00:31:22.114 { 00:31:22.114 "method": "bdev_wait_for_examine" 00:31:22.114 } 00:31:22.114 ] 00:31:22.114 } 00:31:22.114 ] 00:31:22.114 } 00:31:22.372 [2024-11-20 13:51:29.848445] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:31:22.372 [2024-11-20 13:51:29.848660] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72041 ] 00:31:22.372 [2024-11-20 13:51:30.026822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:22.631 [2024-11-20 13:51:30.158302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.889 Running I/O for 5 seconds... 00:31:25.192 51975.00 IOPS, 203.03 MiB/s [2024-11-20T13:51:33.843Z] 56321.00 IOPS, 220.00 MiB/s [2024-11-20T13:51:34.773Z] 56803.00 IOPS, 221.89 MiB/s [2024-11-20T13:51:35.704Z] 52517.00 IOPS, 205.14 MiB/s 00:31:27.985 Latency(us) 00:31:27.985 [2024-11-20T13:51:35.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:27.985 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:31:27.985 xnvme_bdev : 5.00 51576.82 201.47 0.00 0.00 1236.26 384.56 4235.51 00:31:27.985 [2024-11-20T13:51:35.704Z] =================================================================================================================== 00:31:27.985 [2024-11-20T13:51:35.704Z] Total : 51576.82 201.47 0.00 0.00 1236.26 384.56 4235.51 00:31:29.372 13:51:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:31:29.372 13:51:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:31:29.372 13:51:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:31:29.372 13:51:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:31:29.372 13:51:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:29.372 { 00:31:29.372 "subsystems": [ 00:31:29.372 { 00:31:29.372 "subsystem": "bdev", 00:31:29.372 "config": [ 00:31:29.372 { 00:31:29.372 "params": { 00:31:29.372 "io_mechanism": "io_uring", 00:31:29.372 "conserve_cpu": false, 00:31:29.372 "filename": "/dev/nvme0n1", 00:31:29.372 "name": "xnvme_bdev" 00:31:29.372 }, 00:31:29.372 "method": "bdev_xnvme_create" 00:31:29.372 }, 00:31:29.372 { 00:31:29.372 "method": "bdev_wait_for_examine" 00:31:29.372 } 00:31:29.373 ] 00:31:29.373 } 00:31:29.373 ] 00:31:29.373 } 00:31:29.635 [2024-11-20 13:51:37.111705] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
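bdevperf, starting up here for the randwrite pass (pid 72127 per the EAL line that follows), is SPDK's built-in bdev load generator. Reading the invocation above: -q 64 is the queue depth, -o 4096 the I/O size in bytes, -w randwrite the workload, -t 5 the run time in seconds, and -T xnvme_bdev names the target bdev, matching its use throughout this log; --json /dev/fd/62 is again the gen_conf JSON arriving over process substitution. A hand-rolled equivalent under the same path assumptions:

# Sketch: the same bdevperf run, with the JSON config written out inline.
SPDK_DIR=/home/vagrant/spdk_repo/spdk
"$SPDK_DIR"/build/examples/bdevperf \
  --json <(cat <<'JSON'
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"io_mechanism":"io_uring","conserve_cpu":false,
             "filename":"/dev/nvme0n1","name":"xnvme_bdev"},
   "method":"bdev_xnvme_create"},
  {"method":"bdev_wait_for_examine"}]}]}
JSON
) \
  -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096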
00:31:29.635 [2024-11-20 13:51:37.111851] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72127 ] 00:31:29.635 [2024-11-20 13:51:37.294520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:29.893 [2024-11-20 13:51:37.456878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.460 Running I/O for 5 seconds... 00:31:32.336 31104.00 IOPS, 121.50 MiB/s [2024-11-20T13:51:40.989Z] 34944.00 IOPS, 136.50 MiB/s [2024-11-20T13:51:41.924Z] 36778.67 IOPS, 143.67 MiB/s [2024-11-20T13:51:43.301Z] 35520.00 IOPS, 138.75 MiB/s 00:31:35.582 Latency(us) 00:31:35.582 [2024-11-20T13:51:43.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:35.582 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:31:35.582 xnvme_bdev : 5.00 34634.26 135.29 0.00 0.00 1840.91 865.70 5637.81 00:31:35.582 [2024-11-20T13:51:43.301Z] =================================================================================================================== 00:31:35.582 [2024-11-20T13:51:43.301Z] Total : 34634.26 135.29 0.00 0.00 1840.91 865.70 5637.81 00:31:36.960 00:31:36.960 real 0m14.507s 00:31:36.960 user 0m8.036s 00:31:36.960 sys 0m6.271s 00:31:36.960 13:51:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:36.960 13:51:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:36.960 ************************************ 00:31:36.960 END TEST xnvme_bdevperf 00:31:36.960 ************************************ 00:31:36.960 13:51:44 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:31:36.960 13:51:44 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:36.960 13:51:44 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:36.960 13:51:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:31:36.960 ************************************ 00:31:36.960 START TEST xnvme_fio_plugin 00:31:36.960 ************************************ 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local 
fio_dir=/usr/src/fio 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:36.960 13:51:44 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:36.960 { 00:31:36.960 "subsystems": [ 00:31:36.960 { 00:31:36.960 "subsystem": "bdev", 00:31:36.960 "config": [ 00:31:36.960 { 00:31:36.960 "params": { 00:31:36.960 "io_mechanism": "io_uring", 00:31:36.960 "conserve_cpu": false, 00:31:36.960 "filename": "/dev/nvme0n1", 00:31:36.960 "name": "xnvme_bdev" 00:31:36.960 }, 00:31:36.960 "method": "bdev_xnvme_create" 00:31:36.960 }, 00:31:36.960 { 00:31:36.960 "method": "bdev_wait_for_examine" 00:31:36.960 } 00:31:36.960 ] 00:31:36.960 } 00:31:36.960 ] 00:31:36.960 } 00:31:36.960 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:31:36.960 fio-3.35 00:31:36.960 Starting 1 thread 00:31:43.538 00:31:43.538 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72252: Wed Nov 20 13:51:50 2024 00:31:43.538 read: IOPS=34.8k, BW=136MiB/s (143MB/s)(681MiB/5001msec) 00:31:43.538 slat (nsec): min=2818, max=72487, avg=5669.47, stdev=2480.63 00:31:43.538 clat (usec): min=717, max=5025, avg=1611.42, stdev=373.14 00:31:43.538 lat (usec): min=720, max=5035, avg=1617.08, stdev=374.57 00:31:43.538 clat percentiles (usec): 00:31:43.538 | 1.00th=[ 898], 5.00th=[ 1029], 10.00th=[ 1123], 20.00th=[ 1270], 00:31:43.538 | 30.00th=[ 1401], 40.00th=[ 1516], 50.00th=[ 1631], 60.00th=[ 1713], 00:31:43.538 | 70.00th=[ 1811], 80.00th=[ 1909], 90.00th=[ 2057], 95.00th=[ 2212], 00:31:43.538 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 2999], 99.95th=[ 3195], 00:31:43.538 | 99.99th=[ 4883] 00:31:43.538 bw ( KiB/s): min=131584, max=159232, per=100.00%, avg=140515.56, 
stdev=8662.91, samples=9 00:31:43.538 iops : min=32896, max=39808, avg=35128.89, stdev=2165.73, samples=9 00:31:43.538 lat (usec) : 750=0.03%, 1000=3.71% 00:31:43.538 lat (msec) : 2=83.19%, 4=13.04%, 10=0.04% 00:31:43.538 cpu : usr=36.00%, sys=62.94%, ctx=11, majf=0, minf=762 00:31:43.538 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:31:43.538 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:43.538 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:31:43.538 issued rwts: total=174208,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:43.538 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:43.538 00:31:43.538 Run status group 0 (all jobs): 00:31:43.538 READ: bw=136MiB/s (143MB/s), 136MiB/s-136MiB/s (143MB/s-143MB/s), io=681MiB (714MB), run=5001-5001msec 00:31:44.475 ----------------------------------------------------- 00:31:44.475 Suppressions used: 00:31:44.475 count bytes template 00:31:44.475 1 11 /usr/src/fio/parse.c 00:31:44.475 1 8 libtcmalloc_minimal.so 00:31:44.475 1 904 libcrypto.so 00:31:44.475 ----------------------------------------------------- 00:31:44.475 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- 
# [[ -n /usr/lib64/libasan.so.8 ]] 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:44.475 13:51:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:31:44.475 { 00:31:44.475 "subsystems": [ 00:31:44.475 { 00:31:44.475 "subsystem": "bdev", 00:31:44.475 "config": [ 00:31:44.475 { 00:31:44.475 "params": { 00:31:44.475 "io_mechanism": "io_uring", 00:31:44.475 "conserve_cpu": false, 00:31:44.475 "filename": "/dev/nvme0n1", 00:31:44.475 "name": "xnvme_bdev" 00:31:44.475 }, 00:31:44.475 "method": "bdev_xnvme_create" 00:31:44.475 }, 00:31:44.475 { 00:31:44.475 "method": "bdev_wait_for_examine" 00:31:44.475 } 00:31:44.475 ] 00:31:44.475 } 00:31:44.475 ] 00:31:44.475 } 00:31:44.735 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:31:44.735 fio-3.35 00:31:44.735 Starting 1 thread 00:31:51.306 00:31:51.306 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72355: Wed Nov 20 13:51:58 2024 00:31:51.306 write: IOPS=27.3k, BW=106MiB/s (112MB/s)(533MiB/5001msec); 0 zone resets 00:31:51.306 slat (usec): min=2, max=168, avg= 8.29, stdev= 3.14 00:31:51.306 clat (usec): min=968, max=3283, avg=2024.50, stdev=348.22 00:31:51.306 lat (usec): min=971, max=3301, avg=2032.79, stdev=349.68 00:31:51.306 clat percentiles (usec): 00:31:51.306 | 1.00th=[ 1156], 5.00th=[ 1369], 10.00th=[ 1549], 20.00th=[ 1778], 00:31:51.306 | 30.00th=[ 1876], 40.00th=[ 1958], 50.00th=[ 2040], 60.00th=[ 2114], 00:31:51.306 | 70.00th=[ 2212], 80.00th=[ 2311], 90.00th=[ 2474], 95.00th=[ 2573], 00:31:51.306 | 99.00th=[ 2769], 99.50th=[ 2868], 99.90th=[ 3064], 99.95th=[ 3130], 00:31:51.306 | 99.99th=[ 3228] 00:31:51.306 bw ( KiB/s): min=97280, max=133120, per=100.00%, avg=109568.00, stdev=10861.16, samples=9 00:31:51.306 iops : min=24320, max=33280, avg=27392.00, stdev=2715.29, samples=9 00:31:51.306 lat (usec) : 1000=0.04% 00:31:51.306 lat (msec) : 2=44.89%, 4=55.08% 00:31:51.306 cpu : usr=39.80%, sys=58.90%, ctx=15, majf=0, minf=763 00:31:51.306 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:31:51.306 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:51.306 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:31:51.306 issued rwts: total=0,136320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:51.306 latency : target=0, window=0, percentile=100.00%, depth=64 00:31:51.306 00:31:51.306 Run status group 0 (all jobs): 00:31:51.306 WRITE: bw=106MiB/s (112MB/s), 106MiB/s-106MiB/s (112MB/s-112MB/s), io=533MiB (558MB), run=5001-5001msec 00:31:51.876 ----------------------------------------------------- 00:31:51.876 Suppressions used: 00:31:51.876 count bytes template 00:31:51.876 1 11 /usr/src/fio/parse.c 00:31:51.876 1 8 libtcmalloc_minimal.so 00:31:51.876 1 904 libcrypto.so 00:31:51.876 ----------------------------------------------------- 00:31:51.876 00:31:51.876 00:31:51.876 real 0m15.236s 00:31:51.876 user 0m8.080s 00:31:51.876 sys 0m6.786s 00:31:51.876 13:51:59 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:51.876 13:51:59 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:31:51.876 ************************************ 00:31:51.876 END TEST xnvme_fio_plugin 00:31:51.876 ************************************ 00:31:52.134 13:51:59 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:31:52.134 13:51:59 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:31:52.134 13:51:59 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:31:52.134 13:51:59 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:31:52.134 13:51:59 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:52.134 13:51:59 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:52.134 13:51:59 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:31:52.134 ************************************ 00:31:52.134 START TEST xnvme_rpc 00:31:52.134 ************************************ 00:31:52.134 13:51:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:31:52.134 13:51:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:31:52.134 13:51:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:31:52.134 13:51:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:31:52.134 13:51:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:31:52.134 13:51:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72440 00:31:52.134 13:51:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72440 00:31:52.134 13:51:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72440 ']' 00:31:52.134 13:51:59 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:31:52.134 13:51:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:52.134 13:51:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:52.134 13:51:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:52.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:52.134 13:51:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:52.134 13:51:59 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:52.134 [2024-11-20 13:51:59.729581] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
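The spdk_tgt announced here (pid 72440) serves the conserve_cpu=true round of TEST xnvme_rpc: the bdev is created with -c, then the configuration is read back through framework_get_config and each parameter is checked with jq, which is exactly what the rpc_xnvme calls below do. Driven by hand it looks roughly like this (rpc_cmd in the harness wraps SPDK's scripts/rpc.py; the default /var/tmp/spdk.sock socket is assumed):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Create the xNVMe bdev with conserve_cpu enabled (-c), as xnvme.sh@56 does.
"$rpc" bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c

# framework_get_config returns the JSON needed to recreate the subsystem;
# each rpc_xnvme probe below is this pipeline with a different .params key.
"$rpc" framework_get_config bdev |
  jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
# expected: true

"$rpc" bdev_xnvme_delete xnvme_bdev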
00:31:52.134 [2024-11-20 13:51:59.729807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72440 ] 00:31:52.393 [2024-11-20 13:51:59.910203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:52.393 [2024-11-20 13:52:00.037093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:53.344 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:53.344 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:31:53.344 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:31:53.344 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.344 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:53.638 xnvme_bdev 00:31:53.638 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.638 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 
-- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72440 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72440 ']' 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72440 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72440 00:31:53.639 killing process with pid 72440 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72440' 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72440 00:31:53.639 13:52:01 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72440 00:31:56.181 ************************************ 00:31:56.181 END TEST xnvme_rpc 00:31:56.181 ************************************ 00:31:56.181 00:31:56.181 real 0m4.194s 00:31:56.181 user 0m4.322s 00:31:56.181 sys 0m0.524s 00:31:56.181 13:52:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:56.181 13:52:03 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:31:56.181 13:52:03 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:31:56.181 13:52:03 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:31:56.181 13:52:03 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:56.181 13:52:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:31:56.181 ************************************ 00:31:56.181 START TEST xnvme_bdevperf 00:31:56.181 ************************************ 00:31:56.181 13:52:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:31:56.181 13:52:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:31:56.181 13:52:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:31:56.181 13:52:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:31:56.181 13:52:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:31:56.181 13:52:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
00:31:56.181 13:52:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:31:56.181 13:52:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:31:56.440 { 00:31:56.440 "subsystems": [ 00:31:56.440 { 00:31:56.440 "subsystem": "bdev", 00:31:56.440 "config": [ 00:31:56.440 { 00:31:56.440 "params": { 00:31:56.440 "io_mechanism": "io_uring", 00:31:56.440 "conserve_cpu": true, 00:31:56.440 "filename": "/dev/nvme0n1", 00:31:56.440 "name": "xnvme_bdev" 00:31:56.440 }, 00:31:56.440 "method": "bdev_xnvme_create" 00:31:56.440 }, 00:31:56.440 { 00:31:56.440 "method": "bdev_wait_for_examine" 00:31:56.440 } 00:31:56.440 ] 00:31:56.440 } 00:31:56.440 ] 00:31:56.440 } 00:31:56.440 [2024-11-20 13:52:03.964353] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:31:56.440 [2024-11-20 13:52:03.964576] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72523 ] 00:31:56.440 [2024-11-20 13:52:04.143220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.699 [2024-11-20 13:52:04.274228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.031 Running I/O for 5 seconds... 00:31:59.343 46778.00 IOPS, 182.73 MiB/s [2024-11-20T13:52:07.998Z] 40380.50 IOPS, 157.74 MiB/s [2024-11-20T13:52:08.935Z] 37032.00 IOPS, 144.66 MiB/s [2024-11-20T13:52:09.872Z] 35566.00 IOPS, 138.93 MiB/s 00:32:02.153 Latency(us) 00:32:02.153 [2024-11-20T13:52:09.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:02.153 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:32:02.153 xnvme_bdev : 5.00 34189.89 133.55 0.00 0.00 1865.62 321.96 9329.58 00:32:02.153 [2024-11-20T13:52:09.872Z] =================================================================================================================== 00:32:02.153 [2024-11-20T13:52:09.872Z] Total : 34189.89 133.55 0.00 0.00 1865.62 321.96 9329.58 00:32:03.533 13:52:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:32:03.533 13:52:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:32:03.533 13:52:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:32:03.533 13:52:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:32:03.533 13:52:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:03.533 { 00:32:03.533 "subsystems": [ 00:32:03.533 { 00:32:03.533 "subsystem": "bdev", 00:32:03.533 "config": [ 00:32:03.533 { 00:32:03.533 "params": { 00:32:03.533 "io_mechanism": "io_uring", 00:32:03.533 "conserve_cpu": true, 00:32:03.533 "filename": "/dev/nvme0n1", 00:32:03.533 "name": "xnvme_bdev" 00:32:03.533 }, 00:32:03.533 "method": "bdev_xnvme_create" 00:32:03.533 }, 00:32:03.533 { 00:32:03.533 "method": "bdev_wait_for_examine" 00:32:03.533 } 00:32:03.533 ] 00:32:03.533 } 00:32:03.533 ] 00:32:03.533 } 00:32:03.533 [2024-11-20 13:52:10.906254] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
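Every variant in this suite is the same four-parameter bdev config with different values plugged in; gen_conf emits it for each run. A stand-in that makes the knobs explicit (gen_xnvme_conf is a hypothetical name for illustration, not the harness function):

# gen_xnvme_conf is a hypothetical stand-in, not the harness's gen_conf.
gen_xnvme_conf() {
  local mech=$1 conserve=$2 filename=$3 name=$4
  cat <<JSON
{"subsystems":[{"subsystem":"bdev","config":[
  {"params":{"io_mechanism":"$mech","conserve_cpu":$conserve,
             "filename":"$filename","name":"$name"},
   "method":"bdev_xnvme_create"},
  {"method":"bdev_wait_for_examine"}]}]}
JSON
}

# The bdevperf randwrite run announced here (pid 72609 below) uses:
gen_xnvme_conf io_uring true /dev/nvme0n1 xnvme_bdev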
00:32:03.533 [2024-11-20 13:52:10.906480] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72609 ] 00:32:03.533 [2024-11-20 13:52:11.085234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.533 [2024-11-20 13:52:11.224458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:04.100 Running I/O for 5 seconds... 00:32:06.031 28416.00 IOPS, 111.00 MiB/s [2024-11-20T13:52:14.689Z] 29440.00 IOPS, 115.00 MiB/s [2024-11-20T13:52:16.073Z] 30613.33 IOPS, 119.58 MiB/s [2024-11-20T13:52:16.682Z] 30464.00 IOPS, 119.00 MiB/s [2024-11-20T13:52:16.682Z] 29798.40 IOPS, 116.40 MiB/s 00:32:08.963 Latency(us) 00:32:08.963 [2024-11-20T13:52:16.682Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:08.963 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:32:08.963 xnvme_bdev : 5.01 29744.45 116.19 0.00 0.00 2143.94 754.81 7669.72 00:32:08.963 [2024-11-20T13:52:16.682Z] =================================================================================================================== 00:32:08.963 [2024-11-20T13:52:16.682Z] Total : 29744.45 116.19 0.00 0.00 2143.94 754.81 7669.72 00:32:10.341 00:32:10.341 real 0m14.116s 00:32:10.341 user 0m8.228s 00:32:10.341 sys 0m5.423s 00:32:10.341 13:52:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:10.341 13:52:17 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:10.341 ************************************ 00:32:10.341 END TEST xnvme_bdevperf 00:32:10.341 ************************************ 00:32:10.341 13:52:18 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:32:10.341 13:52:18 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:10.341 13:52:18 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:10.341 13:52:18 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:32:10.341 ************************************ 00:32:10.341 START TEST xnvme_fio_plugin 00:32:10.341 ************************************ 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:10.341 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:10.599 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:10.599 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:10.599 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:32:10.599 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:10.600 13:52:18 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:10.600 { 00:32:10.600 "subsystems": [ 00:32:10.600 { 00:32:10.600 "subsystem": "bdev", 00:32:10.600 "config": [ 00:32:10.600 { 00:32:10.600 "params": { 00:32:10.600 "io_mechanism": "io_uring", 00:32:10.600 "conserve_cpu": true, 00:32:10.600 "filename": "/dev/nvme0n1", 00:32:10.600 "name": "xnvme_bdev" 00:32:10.600 }, 00:32:10.600 "method": "bdev_xnvme_create" 00:32:10.600 }, 00:32:10.600 { 00:32:10.600 "method": "bdev_wait_for_examine" 00:32:10.600 } 00:32:10.600 ] 00:32:10.600 } 00:32:10.600 ] 00:32:10.600 } 00:32:10.600 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:32:10.600 fio-3.35 00:32:10.600 Starting 1 thread 00:32:17.172 00:32:17.172 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72737: Wed Nov 20 13:52:24 2024 00:32:17.172 read: IOPS=29.7k, BW=116MiB/s (122MB/s)(581MiB/5002msec) 00:32:17.172 slat (usec): min=2, max=210, avg= 6.78, stdev= 3.32 00:32:17.172 clat (usec): min=873, max=4795, avg=1884.13, stdev=437.61 00:32:17.172 lat (usec): min=876, max=4852, avg=1890.91, stdev=439.67 00:32:17.172 clat percentiles (usec): 00:32:17.172 | 1.00th=[ 1020], 5.00th=[ 1172], 10.00th=[ 1287], 20.00th=[ 1500], 00:32:17.172 | 30.00th=[ 1663], 40.00th=[ 1778], 50.00th=[ 1876], 60.00th=[ 1975], 00:32:17.172 | 70.00th=[ 2114], 80.00th=[ 2245], 90.00th=[ 2474], 95.00th=[ 2638], 00:32:17.172 | 99.00th=[ 2868], 99.50th=[ 2966], 99.90th=[ 3294], 99.95th=[ 3523], 00:32:17.172 | 99.99th=[ 4555] 00:32:17.172 bw ( KiB/s): 
min=100352, max=155537, per=100.00%, avg=119738.78, stdev=19575.66, samples=9 00:32:17.172 iops : min=25088, max=38884, avg=29934.67, stdev=4893.86, samples=9 00:32:17.172 lat (usec) : 1000=0.68% 00:32:17.172 lat (msec) : 2=60.94%, 4=38.34%, 10=0.04% 00:32:17.172 cpu : usr=50.01%, sys=46.27%, ctx=61, majf=0, minf=762 00:32:17.172 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:32:17.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:17.172 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:32:17.172 issued rwts: total=148672,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:17.172 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:17.172 00:32:17.172 Run status group 0 (all jobs): 00:32:17.172 READ: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=581MiB (609MB), run=5002-5002msec 00:32:18.111 ----------------------------------------------------- 00:32:18.111 Suppressions used: 00:32:18.111 count bytes template 00:32:18.111 1 11 /usr/src/fio/parse.c 00:32:18.111 1 8 libtcmalloc_minimal.so 00:32:18.111 1 904 libcrypto.so 00:32:18.111 ----------------------------------------------------- 00:32:18.111 00:32:18.111 13:52:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:32:18.112 13:52:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:18.112 13:52:25 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:32:18.112 13:52:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:18.112 13:52:25 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:32:18.112 13:52:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:18.112 13:52:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:32:18.112 13:52:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:18.112 13:52:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:18.112 13:52:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:18.112 13:52:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:32:18.112 13:52:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:18.112 13:52:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:18.112 13:52:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:18.112 13:52:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:32:18.112 13:52:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:18.371 13:52:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:18.371 13:52:25 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:18.371 13:52:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:32:18.371 13:52:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:18.371 13:52:25 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:18.371 { 00:32:18.371 "subsystems": [ 00:32:18.371 { 00:32:18.371 "subsystem": "bdev", 00:32:18.371 "config": [ 00:32:18.371 { 00:32:18.371 "params": { 00:32:18.371 "io_mechanism": "io_uring", 00:32:18.371 "conserve_cpu": true, 00:32:18.371 "filename": "/dev/nvme0n1", 00:32:18.371 "name": "xnvme_bdev" 00:32:18.371 }, 00:32:18.371 "method": "bdev_xnvme_create" 00:32:18.371 }, 00:32:18.371 { 00:32:18.371 "method": "bdev_wait_for_examine" 00:32:18.371 } 00:32:18.371 ] 00:32:18.371 } 00:32:18.371 ] 00:32:18.371 } 00:32:18.371 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:32:18.371 fio-3.35 00:32:18.371 Starting 1 thread 00:32:24.960 00:32:24.960 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72842: Wed Nov 20 13:52:31 2024 00:32:24.960 write: IOPS=28.7k, BW=112MiB/s (118MB/s)(562MiB/5002msec); 0 zone resets 00:32:24.960 slat (usec): min=2, max=102, avg= 7.36, stdev= 3.91 00:32:24.960 clat (usec): min=743, max=3914, avg=1937.88, stdev=474.64 00:32:24.960 lat (usec): min=746, max=3939, avg=1945.23, stdev=477.15 00:32:24.960 clat percentiles (usec): 00:32:24.960 | 1.00th=[ 947], 5.00th=[ 1139], 10.00th=[ 1303], 20.00th=[ 1532], 00:32:24.960 | 30.00th=[ 1680], 40.00th=[ 1811], 50.00th=[ 1942], 60.00th=[ 2057], 00:32:24.960 | 70.00th=[ 2212], 80.00th=[ 2376], 90.00th=[ 2573], 95.00th=[ 2704], 00:32:24.960 | 99.00th=[ 2999], 99.50th=[ 3097], 99.90th=[ 3392], 99.95th=[ 3523], 00:32:24.960 | 99.99th=[ 3752] 00:32:24.960 bw ( KiB/s): min=109568, max=128512, per=100.00%, avg=117077.33, stdev=6685.47, samples=9 00:32:24.960 iops : min=27392, max=32128, avg=29269.33, stdev=1671.37, samples=9 00:32:24.960 lat (usec) : 750=0.01%, 1000=1.72% 00:32:24.960 lat (msec) : 2=53.44%, 4=44.84% 00:32:24.960 cpu : usr=54.01%, sys=42.53%, ctx=13, majf=0, minf=763 00:32:24.960 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:32:24.960 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:24.960 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:32:24.960 issued rwts: total=0,143744,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:24.960 latency : target=0, window=0, percentile=100.00%, depth=64 00:32:24.960 00:32:24.960 Run status group 0 (all jobs): 00:32:24.960 WRITE: bw=112MiB/s (118MB/s), 112MiB/s-112MiB/s (118MB/s-118MB/s), io=562MiB (589MB), run=5002-5002msec 00:32:25.901 ----------------------------------------------------- 00:32:25.901 Suppressions used: 00:32:25.901 count bytes template 00:32:25.901 1 11 /usr/src/fio/parse.c 00:32:25.901 1 8 libtcmalloc_minimal.so 00:32:25.901 1 904 libcrypto.so 00:32:25.901 ----------------------------------------------------- 00:32:25.901 00:32:25.901 00:32:25.901 real 0m15.353s 00:32:25.901 user 0m9.521s 00:32:25.901 sys 0m5.228s 00:32:25.901 ************************************ 
00:32:25.901 END TEST xnvme_fio_plugin 00:32:25.901 ************************************ 00:32:25.901 13:52:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.901 13:52:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:32:25.901 13:52:33 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:32:25.901 13:52:33 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:32:25.901 13:52:33 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:32:25.901 13:52:33 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:32:25.901 13:52:33 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:32:25.901 13:52:33 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:32:25.901 13:52:33 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:32:25.901 13:52:33 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:32:25.901 13:52:33 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:32:25.901 13:52:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:25.901 13:52:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.901 13:52:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:32:25.901 ************************************ 00:32:25.901 START TEST xnvme_rpc 00:32:25.901 ************************************ 00:32:25.901 13:52:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:32:25.901 13:52:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:32:25.901 13:52:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:32:25.901 13:52:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:32:25.901 13:52:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:32:25.901 13:52:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72929 00:32:25.901 13:52:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:32:25.901 13:52:33 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72929 00:32:25.901 13:52:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72929 ']' 00:32:25.901 13:52:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:25.901 13:52:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:25.901 13:52:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:25.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:25.901 13:52:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:25.901 13:52:33 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:25.901 [2024-11-20 13:52:33.581306] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
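The LD_PRELOAD preparation traced in the xnvme_fio_plugin test that just ended reduces to a short shell idiom: resolve which ASAN runtime the fio plugin links against, then preload that runtime ahead of the plugin so fio can dlopen the instrumented .so. A minimal sketch of that logic, reconstructed from the autotest_common.sh trace lines above (illustrative, not the verbatim helper):

sanitizers=('libasan' 'libclang_rt.asan')
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
for sanitizer in "${sanitizers[@]}"; do
    # ldd prints e.g. "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"; field 3 is the path
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && break
done
# Preload the sanitizer runtime before the plugin itself, then run fio as traced
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k \
    --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 \
    --thread=1 --name xnvme_bdev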
00:32:25.901 [2024-11-20 13:52:33.581514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72929 ] 00:32:26.161 [2024-11-20 13:52:33.761743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:26.161 [2024-11-20 13:52:33.876597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:27.102 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:27.102 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:32:27.102 13:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:32:27.102 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.102 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:27.362 xnvme_bdev 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:32:27.362 
13:52:34 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:27.362 13:52:34 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.362 13:52:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:32:27.362 13:52:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:32:27.362 13:52:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:27.362 13:52:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:27.362 13:52:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:27.362 13:52:35 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72929 00:32:27.362 13:52:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72929 ']' 00:32:27.362 13:52:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72929 00:32:27.362 13:52:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:32:27.362 13:52:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:27.362 13:52:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72929 00:32:27.362 killing process with pid 72929 00:32:27.362 13:52:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:27.362 13:52:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:27.362 13:52:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72929' 00:32:27.362 13:52:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72929 00:32:27.362 13:52:35 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72929 00:32:29.899 00:32:29.899 real 0m4.114s 00:32:29.899 user 0m4.264s 00:32:29.899 sys 0m0.540s 00:32:29.899 13:52:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:29.899 13:52:37 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:32:29.899 ************************************ 00:32:29.900 END TEST xnvme_rpc 00:32:29.900 ************************************ 00:32:30.158 13:52:37 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:32:30.158 13:52:37 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:30.158 13:52:37 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:30.158 13:52:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:32:30.158 ************************************ 00:32:30.158 START TEST xnvme_bdevperf 00:32:30.158 ************************************ 00:32:30.158 13:52:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:32:30.158 13:52:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:32:30.158 13:52:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:32:30.158 13:52:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:32:30.158 13:52:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:32:30.158 13:52:37 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:32:30.158 13:52:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:32:30.159 13:52:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:30.159 { 00:32:30.159 "subsystems": [ 00:32:30.159 { 00:32:30.159 "subsystem": "bdev", 00:32:30.159 "config": [ 00:32:30.159 { 00:32:30.159 "params": { 00:32:30.159 "io_mechanism": "io_uring_cmd", 00:32:30.159 "conserve_cpu": false, 00:32:30.159 "filename": "/dev/ng0n1", 00:32:30.159 "name": "xnvme_bdev" 00:32:30.159 }, 00:32:30.159 "method": "bdev_xnvme_create" 00:32:30.159 }, 00:32:30.159 { 00:32:30.159 "method": "bdev_wait_for_examine" 00:32:30.159 } 00:32:30.159 ] 00:32:30.159 } 00:32:30.159 ] 00:32:30.159 } 00:32:30.159 [2024-11-20 13:52:37.746327] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:32:30.159 [2024-11-20 13:52:37.746449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73013 ] 00:32:30.417 [2024-11-20 13:52:37.915137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.417 [2024-11-20 13:52:38.035645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.982 Running I/O for 5 seconds... 00:32:32.851 50016.00 IOPS, 195.38 MiB/s [2024-11-20T13:52:41.506Z] 46998.50 IOPS, 183.59 MiB/s [2024-11-20T13:52:42.441Z] 43283.00 IOPS, 169.07 MiB/s [2024-11-20T13:52:43.834Z] 40755.50 IOPS, 159.20 MiB/s 00:32:36.115 Latency(us) 00:32:36.115 [2024-11-20T13:52:43.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:36.115 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:32:36.115 xnvme_bdev : 5.00 39125.41 152.83 0.00 0.00 1630.02 336.27 123631.23 00:32:36.115 [2024-11-20T13:52:43.834Z] =================================================================================================================== 00:32:36.115 [2024-11-20T13:52:43.834Z] Total : 39125.41 152.83 0.00 0.00 1630.02 336.27 123631.23 00:32:37.053 13:52:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:32:37.053 13:52:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:32:37.053 13:52:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:32:37.053 13:52:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:32:37.053 13:52:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:37.053 { 00:32:37.053 "subsystems": [ 00:32:37.053 { 00:32:37.053 "subsystem": "bdev", 00:32:37.053 "config": [ 00:32:37.053 { 00:32:37.053 "params": { 00:32:37.053 "io_mechanism": "io_uring_cmd", 00:32:37.053 "conserve_cpu": false, 00:32:37.053 "filename": "/dev/ng0n1", 00:32:37.053 "name": "xnvme_bdev" 00:32:37.053 }, 00:32:37.053 "method": "bdev_xnvme_create" 00:32:37.053 }, 00:32:37.053 { 00:32:37.053 "method": "bdev_wait_for_examine" 00:32:37.053 } 00:32:37.053 ] 00:32:37.053 } 00:32:37.053 ] 00:32:37.053 } 00:32:37.053 [2024-11-20 13:52:44.645048] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
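The xnvme_rpc test earlier in this block (spdk_tgt pid 72929) boils down to a create/inspect/delete sequence over the JSON-RPC socket. Done by hand against a running target it would look roughly like the following (rpc.py subcommand spellings assumed to mirror the RPC method names seen in the trace):

# Register an xnvme bdev on the char-device node, io_uring_cmd mechanism
scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd
# Read the registered config back and pick out one field, as rpc_xnvme does
scripts/rpc.py framework_get_config bdev |
    jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'
# Expected output for this run: io_uring_cmd
scripts/rpc.py bdev_xnvme_delete xnvme_bdev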
00:32:37.053 [2024-11-20 13:52:44.645646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73093 ] 00:32:37.313 [2024-11-20 13:52:44.818261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.313 [2024-11-20 13:52:44.938234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.575 Running I/O for 5 seconds... 00:32:39.911 27456.00 IOPS, 107.25 MiB/s [2024-11-20T13:52:48.568Z] 26528.00 IOPS, 103.62 MiB/s [2024-11-20T13:52:49.504Z] 25770.67 IOPS, 100.67 MiB/s [2024-11-20T13:52:50.441Z] 25520.00 IOPS, 99.69 MiB/s 00:32:42.722 Latency(us) 00:32:42.722 [2024-11-20T13:52:50.441Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:42.722 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:32:42.722 xnvme_bdev : 5.00 25922.68 101.26 0.00 0.00 2458.86 1044.57 8757.21 00:32:42.722 [2024-11-20T13:52:50.441Z] =================================================================================================================== 00:32:42.722 [2024-11-20T13:52:50.441Z] Total : 25922.68 101.26 0.00 0.00 2458.86 1044.57 8757.21 00:32:44.102 13:52:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:32:44.102 13:52:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:32:44.102 13:52:51 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:32:44.102 13:52:51 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:32:44.102 13:52:51 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:44.102 { 00:32:44.102 "subsystems": [ 00:32:44.102 { 00:32:44.102 "subsystem": "bdev", 00:32:44.102 "config": [ 00:32:44.102 { 00:32:44.102 "params": { 00:32:44.102 "io_mechanism": "io_uring_cmd", 00:32:44.102 "conserve_cpu": false, 00:32:44.102 "filename": "/dev/ng0n1", 00:32:44.102 "name": "xnvme_bdev" 00:32:44.102 }, 00:32:44.102 "method": "bdev_xnvme_create" 00:32:44.102 }, 00:32:44.102 { 00:32:44.102 "method": "bdev_wait_for_examine" 00:32:44.102 } 00:32:44.102 ] 00:32:44.102 } 00:32:44.102 ] 00:32:44.102 } 00:32:44.102 [2024-11-20 13:52:51.551781] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:32:44.102 [2024-11-20 13:52:51.551994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73172 ] 00:32:44.102 [2024-11-20 13:52:51.718661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:44.361 [2024-11-20 13:52:51.838370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:44.619 Running I/O for 5 seconds... 
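Every bdevperf invocation in this block receives its bdev configuration as JSON on file descriptor 62. A bash process substitution reproduces that wiring for the unmap run just launched, with the parameters verbatim from the trace (the <( ) plumbing here is a sketch of what the wrapper does, not the wrapper itself):

conf=$(cat <<'EOF'
{"subsystems": [{"subsystem": "bdev", "config": [
    {"method": "bdev_xnvme_create",
     "params": {"io_mechanism": "io_uring_cmd", "conserve_cpu": false,
                "filename": "/dev/ng0n1", "name": "xnvme_bdev"}},
    {"method": "bdev_wait_for_examine"}]}]}
EOF
)
# Feed the config on an anonymous fd, matching --json /dev/fd/62 in the log
build/examples/bdevperf --json <(printf '%s\n' "$conf") \
    -q 64 -o 4096 -w unmap -t 5 -T xnvme_bdev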
00:32:46.495 76608.00 IOPS, 299.25 MiB/s [2024-11-20T13:52:55.596Z] 76640.00 IOPS, 299.38 MiB/s [2024-11-20T13:52:56.534Z] 73728.00 IOPS, 288.00 MiB/s [2024-11-20T13:52:57.476Z] 73952.00 IOPS, 288.88 MiB/s 00:32:49.757 Latency(us) 00:32:49.757 [2024-11-20T13:52:57.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:49.757 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:32:49.757 xnvme_bdev : 5.00 73484.05 287.05 0.00 0.00 867.88 465.05 2675.81 00:32:49.757 [2024-11-20T13:52:57.476Z] =================================================================================================================== 00:32:49.757 [2024-11-20T13:52:57.476Z] Total : 73484.05 287.05 0.00 0.00 867.88 465.05 2675.81 00:32:51.139 13:52:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:32:51.139 13:52:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:32:51.139 13:52:58 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:32:51.139 13:52:58 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:32:51.139 13:52:58 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:51.139 { 00:32:51.139 "subsystems": [ 00:32:51.139 { 00:32:51.139 "subsystem": "bdev", 00:32:51.139 "config": [ 00:32:51.139 { 00:32:51.139 "params": { 00:32:51.139 "io_mechanism": "io_uring_cmd", 00:32:51.139 "conserve_cpu": false, 00:32:51.139 "filename": "/dev/ng0n1", 00:32:51.139 "name": "xnvme_bdev" 00:32:51.139 }, 00:32:51.140 "method": "bdev_xnvme_create" 00:32:51.140 }, 00:32:51.140 { 00:32:51.140 "method": "bdev_wait_for_examine" 00:32:51.140 } 00:32:51.140 ] 00:32:51.140 } 00:32:51.140 ] 00:32:51.140 } 00:32:51.140 [2024-11-20 13:52:58.534708] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:32:51.140 [2024-11-20 13:52:58.534976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73248 ] 00:32:51.140 [2024-11-20 13:52:58.719229] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.399 [2024-11-20 13:52:58.863346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.657 Running I/O for 5 seconds... 
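The four bdevperf runs in this stretch come from a single loop over the io_uring_cmd pattern list in xnvme.sh; schematically (loop flattened, gen_conf as traced above supplying the JSON):

for io_pattern in randread randwrite unmap write_zeroes; do
    # One 5-second run per workload against the same xnvme bdev
    build/examples/bdevperf --json <(gen_conf) -q 64 -o 4096 \
        -w "$io_pattern" -t 5 -T xnvme_bdev
done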
00:32:53.974 37678.00 IOPS, 147.18 MiB/s [2024-11-20T13:53:02.631Z] 33690.50 IOPS, 131.60 MiB/s [2024-11-20T13:53:03.569Z] 31897.00 IOPS, 124.60 MiB/s [2024-11-20T13:53:04.507Z] 30547.50 IOPS, 119.33 MiB/s [2024-11-20T13:53:04.507Z] 29808.00 IOPS, 116.44 MiB/s 00:32:56.788 Latency(us) 00:32:56.788 [2024-11-20T13:53:04.507Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.788 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:32:56.788 xnvme_bdev : 5.00 29793.48 116.38 0.00 0.00 2138.08 167.24 14366.41 00:32:56.788 [2024-11-20T13:53:04.507Z] =================================================================================================================== 00:32:56.788 [2024-11-20T13:53:04.507Z] Total : 29793.48 116.38 0.00 0.00 2138.08 167.24 14366.41 00:32:58.169 ************************************ 00:32:58.169 END TEST xnvme_bdevperf 00:32:58.169 ************************************ 00:32:58.169 00:32:58.169 real 0m27.838s 00:32:58.169 user 0m16.444s 00:32:58.169 sys 0m10.958s 00:32:58.169 13:53:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:58.169 13:53:05 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:32:58.169 13:53:05 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:32:58.169 13:53:05 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:58.169 13:53:05 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:58.169 13:53:05 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:32:58.169 ************************************ 00:32:58.169 START TEST xnvme_fio_plugin 00:32:58.169 ************************************ 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- 
xnvme/xnvme.sh@32 -- # gen_conf 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:58.169 { 00:32:58.169 "subsystems": [ 00:32:58.169 { 00:32:58.169 "subsystem": "bdev", 00:32:58.169 "config": [ 00:32:58.169 { 00:32:58.169 "params": { 00:32:58.169 "io_mechanism": "io_uring_cmd", 00:32:58.169 "conserve_cpu": false, 00:32:58.169 "filename": "/dev/ng0n1", 00:32:58.169 "name": "xnvme_bdev" 00:32:58.169 }, 00:32:58.169 "method": "bdev_xnvme_create" 00:32:58.169 }, 00:32:58.169 { 00:32:58.169 "method": "bdev_wait_for_examine" 00:32:58.169 } 00:32:58.169 ] 00:32:58.169 } 00:32:58.169 ] 00:32:58.169 } 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:58.169 13:53:05 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:32:58.169 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:32:58.169 fio-3.35 00:32:58.170 Starting 1 thread 00:33:04.735 00:33:04.735 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73371: Wed Nov 20 13:53:11 2024 00:33:04.735 read: IOPS=34.6k, BW=135MiB/s (142MB/s)(676MiB/5002msec) 00:33:04.735 slat (nsec): min=2642, max=97932, avg=6032.99, stdev=2899.01 00:33:04.735 clat (usec): min=686, max=4031, avg=1616.18, stdev=406.40 00:33:04.735 lat (usec): min=689, max=4066, avg=1622.21, stdev=408.43 00:33:04.735 clat percentiles (usec): 00:33:04.735 | 1.00th=[ 889], 5.00th=[ 971], 10.00th=[ 1029], 20.00th=[ 1172], 00:33:04.735 | 30.00th=[ 1401], 40.00th=[ 1582], 50.00th=[ 1680], 60.00th=[ 1762], 00:33:04.735 | 70.00th=[ 1844], 80.00th=[ 1942], 90.00th=[ 2073], 95.00th=[ 2212], 00:33:04.735 | 99.00th=[ 2573], 99.50th=[ 2868], 99.90th=[ 3556], 99.95th=[ 3720], 00:33:04.735 | 99.99th=[ 3949] 00:33:04.735 bw ( KiB/s): min=121856, max=163840, per=100.00%, avg=141141.33, stdev=12338.56, samples=9 00:33:04.735 iops : min=30464, max=40960, avg=35285.33, stdev=3084.64, samples=9 00:33:04.735 lat (usec) : 750=0.04%, 1000=7.34% 00:33:04.735 lat (msec) : 2=77.82%, 4=14.81%, 10=0.01% 00:33:04.735 cpu : usr=41.35%, sys=57.63%, ctx=10, majf=0, minf=762 00:33:04.735 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:33:04.735 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:04.735 complete : 0=0.0%, 4=98.5%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:33:04.735 issued rwts: total=172992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:04.735 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:04.735 00:33:04.735 Run status group 0 (all jobs): 00:33:04.735 READ: bw=135MiB/s (142MB/s), 135MiB/s-135MiB/s (142MB/s-142MB/s), io=676MiB (709MB), run=5002-5002msec 00:33:05.304 ----------------------------------------------------- 00:33:05.304 Suppressions used: 00:33:05.304 count bytes template 00:33:05.304 1 11 /usr/src/fio/parse.c 00:33:05.304 1 8 libtcmalloc_minimal.so 00:33:05.304 1 904 libcrypto.so 00:33:05.304 ----------------------------------------------------- 00:33:05.304 00:33:05.304 13:53:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:33:05.304 13:53:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:33:05.304 13:53:12 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:33:05.304 13:53:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:33:05.304 13:53:12 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:33:05.304 13:53:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:05.304 13:53:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:33:05.304 13:53:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:05.304 13:53:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:05.304 13:53:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:05.304 13:53:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:33:05.304 13:53:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:05.304 13:53:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:05.304 13:53:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:05.304 13:53:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:05.304 13:53:12 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:33:05.304 13:53:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:05.304 13:53:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:05.304 13:53:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:33:05.304 13:53:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:05.304 13:53:13 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev 
--direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:33:05.563 { 00:33:05.564 "subsystems": [ 00:33:05.564 { 00:33:05.564 "subsystem": "bdev", 00:33:05.564 "config": [ 00:33:05.564 { 00:33:05.564 "params": { 00:33:05.564 "io_mechanism": "io_uring_cmd", 00:33:05.564 "conserve_cpu": false, 00:33:05.564 "filename": "/dev/ng0n1", 00:33:05.564 "name": "xnvme_bdev" 00:33:05.564 }, 00:33:05.564 "method": "bdev_xnvme_create" 00:33:05.564 }, 00:33:05.564 { 00:33:05.564 "method": "bdev_wait_for_examine" 00:33:05.564 } 00:33:05.564 ] 00:33:05.564 } 00:33:05.564 ] 00:33:05.564 } 00:33:05.564 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:33:05.564 fio-3.35 00:33:05.564 Starting 1 thread 00:33:12.135 00:33:12.135 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73463: Wed Nov 20 13:53:19 2024 00:33:12.135 write: IOPS=32.5k, BW=127MiB/s (133MB/s)(635MiB/5002msec); 0 zone resets 00:33:12.135 slat (usec): min=2, max=237, avg= 6.95, stdev= 3.74 00:33:12.135 clat (usec): min=269, max=6005, avg=1694.53, stdev=461.72 00:33:12.135 lat (usec): min=274, max=6013, avg=1701.49, stdev=464.03 00:33:12.135 clat percentiles (usec): 00:33:12.135 | 1.00th=[ 906], 5.00th=[ 1004], 10.00th=[ 1074], 20.00th=[ 1188], 00:33:12.135 | 30.00th=[ 1369], 40.00th=[ 1565], 50.00th=[ 1729], 60.00th=[ 1860], 00:33:12.135 | 70.00th=[ 1975], 80.00th=[ 2114], 90.00th=[ 2278], 95.00th=[ 2442], 00:33:12.135 | 99.00th=[ 2671], 99.50th=[ 2769], 99.90th=[ 3032], 99.95th=[ 3130], 00:33:12.135 | 99.99th=[ 3228] 00:33:12.135 bw ( KiB/s): min=108904, max=168960, per=100.00%, avg=132765.33, stdev=21968.08, samples=9 00:33:12.135 iops : min=27226, max=42240, avg=33191.33, stdev=5492.02, samples=9 00:33:12.135 lat (usec) : 500=0.01%, 750=0.05%, 1000=4.60% 00:33:12.135 lat (msec) : 2=67.24%, 4=28.11%, 10=0.01% 00:33:12.135 cpu : usr=44.51%, sys=54.19%, ctx=47, majf=0, minf=763 00:33:12.135 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:33:12.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:12.135 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:33:12.135 issued rwts: total=0,162609,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:12.135 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:12.135 00:33:12.135 Run status group 0 (all jobs): 00:33:12.135 WRITE: bw=127MiB/s (133MB/s), 127MiB/s-127MiB/s (133MB/s-133MB/s), io=635MiB (666MB), run=5002-5002msec 00:33:13.074 ----------------------------------------------------- 00:33:13.074 Suppressions used: 00:33:13.074 count bytes template 00:33:13.074 1 11 /usr/src/fio/parse.c 00:33:13.074 1 8 libtcmalloc_minimal.so 00:33:13.074 1 904 libcrypto.so 00:33:13.074 ----------------------------------------------------- 00:33:13.074 00:33:13.074 00:33:13.074 real 0m14.923s 00:33:13.074 user 0m8.222s 00:33:13.074 sys 0m6.309s 00:33:13.074 13:53:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:13.074 13:53:20 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:33:13.074 ************************************ 00:33:13.074 END TEST xnvme_fio_plugin 00:33:13.074 ************************************ 00:33:13.074 13:53:20 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:33:13.074 13:53:20 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:33:13.074 13:53:20 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:33:13.074 13:53:20 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:33:13.074 13:53:20 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:13.074 13:53:20 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:13.074 13:53:20 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:13.074 ************************************ 00:33:13.074 START TEST xnvme_rpc 00:33:13.074 ************************************ 00:33:13.074 13:53:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:33:13.074 13:53:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:33:13.074 13:53:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:33:13.074 13:53:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:33:13.074 13:53:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:33:13.074 13:53:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73554 00:33:13.074 13:53:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:13.074 13:53:20 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73554 00:33:13.074 13:53:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73554 ']' 00:33:13.074 13:53:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:13.074 13:53:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:13.074 13:53:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.074 13:53:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:13.074 13:53:20 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:13.074 [2024-11-20 13:53:20.640504] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
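The cc table traced above (cc["false"] empty, cc["true"]=-c) is how the suite toggles conserve_cpu: each io_mechanism is exercised twice, and the second pass appends -c to bdev_xnvme_create. In outline (a sketch that folds the two traced loops together; helper names taken from the trace):

declare -A cc=( ["false"]="" ["true"]="-c" )
for conserve_cpu in false true; do
    # -c sets conserve_cpu=true on the created bdev, as in the rpc_cmd line below
    rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd ${cc[$conserve_cpu]}
    rpc_cmd bdev_xnvme_delete xnvme_bdev
done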
00:33:13.074 [2024-11-20 13:53:20.640725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73554 ] 00:33:13.333 [2024-11-20 13:53:20.818789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:13.333 [2024-11-20 13:53:20.951857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:14.271 xnvme_bdev 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.271 13:53:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:33:14.597 13:53:21 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:33:14.597 13:53:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:33:14.597 13:53:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.597 13:53:21 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:14.597 13:53:21 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:33:14.597 
13:53:22 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73554 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73554 ']' 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73554 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73554 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:14.597 killing process with pid 73554 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73554' 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73554 00:33:14.597 13:53:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73554 00:33:17.144 ************************************ 00:33:17.144 END TEST xnvme_rpc 00:33:17.144 ************************************ 00:33:17.144 00:33:17.144 real 0m4.313s 00:33:17.144 user 0m4.429s 00:33:17.144 sys 0m0.515s 00:33:17.144 13:53:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:17.144 13:53:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:33:17.404 13:53:24 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:33:17.404 13:53:24 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:17.404 13:53:24 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:17.404 13:53:24 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:17.404 ************************************ 00:33:17.404 START TEST xnvme_bdevperf 00:33:17.404 ************************************ 00:33:17.404 13:53:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:33:17.404 13:53:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:33:17.404 13:53:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:33:17.404 13:53:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:33:17.404 13:53:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:33:17.404 13:53:24 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T 
xnvme_bdev -o 4096 00:33:17.404 13:53:24 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:33:17.404 13:53:24 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:17.404 { 00:33:17.404 "subsystems": [ 00:33:17.404 { 00:33:17.404 "subsystem": "bdev", 00:33:17.404 "config": [ 00:33:17.404 { 00:33:17.404 "params": { 00:33:17.404 "io_mechanism": "io_uring_cmd", 00:33:17.404 "conserve_cpu": true, 00:33:17.404 "filename": "/dev/ng0n1", 00:33:17.404 "name": "xnvme_bdev" 00:33:17.404 }, 00:33:17.404 "method": "bdev_xnvme_create" 00:33:17.404 }, 00:33:17.404 { 00:33:17.404 "method": "bdev_wait_for_examine" 00:33:17.404 } 00:33:17.404 ] 00:33:17.404 } 00:33:17.404 ] 00:33:17.404 } 00:33:17.404 [2024-11-20 13:53:25.026682] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:33:17.404 [2024-11-20 13:53:25.026873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73638 ] 00:33:17.663 [2024-11-20 13:53:25.209270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.663 [2024-11-20 13:53:25.342284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:18.230 Running I/O for 5 seconds... 00:33:20.099 44478.00 IOPS, 173.74 MiB/s [2024-11-20T13:53:28.799Z] 45599.00 IOPS, 178.12 MiB/s [2024-11-20T13:53:29.735Z] 43903.33 IOPS, 171.50 MiB/s [2024-11-20T13:53:31.113Z] 42831.50 IOPS, 167.31 MiB/s 00:33:23.394 Latency(us) 00:33:23.394 [2024-11-20T13:53:31.113Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.394 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:33:23.394 xnvme_bdev : 5.00 41252.03 161.14 0.00 0.00 1546.27 633.18 9444.05 00:33:23.394 [2024-11-20T13:53:31.113Z] =================================================================================================================== 00:33:23.394 [2024-11-20T13:53:31.113Z] Total : 41252.03 161.14 0.00 0.00 1546.27 633.18 9444.05 00:33:24.333 13:53:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:33:24.333 13:53:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:33:24.333 13:53:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:33:24.334 13:53:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:33:24.334 13:53:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:24.334 { 00:33:24.334 "subsystems": [ 00:33:24.334 { 00:33:24.334 "subsystem": "bdev", 00:33:24.334 "config": [ 00:33:24.334 { 00:33:24.334 "params": { 00:33:24.334 "io_mechanism": "io_uring_cmd", 00:33:24.334 "conserve_cpu": true, 00:33:24.334 "filename": "/dev/ng0n1", 00:33:24.334 "name": "xnvme_bdev" 00:33:24.334 }, 00:33:24.334 "method": "bdev_xnvme_create" 00:33:24.334 }, 00:33:24.334 { 00:33:24.334 "method": "bdev_wait_for_examine" 00:33:24.334 } 00:33:24.334 ] 00:33:24.334 } 00:33:24.334 ] 00:33:24.334 } 00:33:24.595 [2024-11-20 13:53:32.080963] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
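The rpc_xnvme name/filename/io_mechanism/conserve_cpu checks in the second xnvme_rpc test above (pid 73554) all parse the same framework_get_config bdev reply; for that conserve_cpu pass its relevant entry looks approximately like this (shape inferred from the jq filters, field values taken from the trace):

[
  {
    "method": "bdev_xnvme_create",
    "params": {
      "name": "xnvme_bdev",
      "filename": "/dev/ng0n1",
      "io_mechanism": "io_uring_cmd",
      "conserve_cpu": true
    }
  }
]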
00:33:24.595 [2024-11-20 13:53:32.081121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73719 ] 00:33:24.595 [2024-11-20 13:53:32.264701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:24.854 [2024-11-20 13:53:32.399982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.114 Running I/O for 5 seconds... 00:33:27.429 32512.00 IOPS, 127.00 MiB/s [2024-11-20T13:53:36.157Z] 32096.00 IOPS, 125.38 MiB/s [2024-11-20T13:53:37.091Z] 31594.67 IOPS, 123.42 MiB/s [2024-11-20T13:53:38.028Z] 31888.00 IOPS, 124.56 MiB/s [2024-11-20T13:53:38.028Z] 31718.40 IOPS, 123.90 MiB/s 00:33:30.309 Latency(us) 00:33:30.309 [2024-11-20T13:53:38.028Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:30.309 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:33:30.309 xnvme_bdev : 5.01 31701.48 123.83 0.00 0.00 2011.58 783.43 8413.79 00:33:30.309 [2024-11-20T13:53:38.028Z] =================================================================================================================== 00:33:30.309 [2024-11-20T13:53:38.028Z] Total : 31701.48 123.83 0.00 0.00 2011.58 783.43 8413.79 00:33:31.697 13:53:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:33:31.697 13:53:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:33:31.697 13:53:39 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:33:31.697 13:53:39 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:33:31.697 13:53:39 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:31.697 { 00:33:31.697 "subsystems": [ 00:33:31.697 { 00:33:31.697 "subsystem": "bdev", 00:33:31.697 "config": [ 00:33:31.697 { 00:33:31.697 "params": { 00:33:31.697 "io_mechanism": "io_uring_cmd", 00:33:31.697 "conserve_cpu": true, 00:33:31.697 "filename": "/dev/ng0n1", 00:33:31.697 "name": "xnvme_bdev" 00:33:31.697 }, 00:33:31.697 "method": "bdev_xnvme_create" 00:33:31.697 }, 00:33:31.697 { 00:33:31.697 "method": "bdev_wait_for_examine" 00:33:31.697 } 00:33:31.697 ] 00:33:31.697 } 00:33:31.697 ] 00:33:31.697 } 00:33:31.697 [2024-11-20 13:53:39.165725] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:33:31.697 [2024-11-20 13:53:39.165939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73799 ] 00:33:31.697 [2024-11-20 13:53:39.343812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:31.956 [2024-11-20 13:53:39.468375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.216 Running I/O for 5 seconds... 
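The randwrite table above is internally consistent with Little's law: at queue depth 64, the expected mean latency is qd / IOPS = 64 / 31701.48 ≈ 2018.8 usec, against the reported 2011.58 usec average, agreement within roughly 0.4%.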
00:33:34.529 70016.00 IOPS, 273.50 MiB/s [2024-11-20T13:53:43.184Z] 70336.00 IOPS, 274.75 MiB/s [2024-11-20T13:53:44.122Z] 70464.00 IOPS, 275.25 MiB/s [2024-11-20T13:53:45.058Z] 75280.00 IOPS, 294.06 MiB/s 00:33:37.339 Latency(us) 00:33:37.339 [2024-11-20T13:53:45.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:37.339 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:33:37.339 xnvme_bdev : 5.00 78175.77 305.37 0.00 0.00 815.42 302.28 4722.03 00:33:37.339 [2024-11-20T13:53:45.058Z] =================================================================================================================== 00:33:37.339 [2024-11-20T13:53:45.058Z] Total : 78175.77 305.37 0.00 0.00 815.42 302.28 4722.03 00:33:38.716 13:53:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:33:38.716 13:53:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:33:38.716 13:53:46 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:33:38.716 13:53:46 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:33:38.716 13:53:46 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:38.717 { 00:33:38.717 "subsystems": [ 00:33:38.717 { 00:33:38.717 "subsystem": "bdev", 00:33:38.717 "config": [ 00:33:38.717 { 00:33:38.717 "params": { 00:33:38.717 "io_mechanism": "io_uring_cmd", 00:33:38.717 "conserve_cpu": true, 00:33:38.717 "filename": "/dev/ng0n1", 00:33:38.717 "name": "xnvme_bdev" 00:33:38.717 }, 00:33:38.717 "method": "bdev_xnvme_create" 00:33:38.717 }, 00:33:38.717 { 00:33:38.717 "method": "bdev_wait_for_examine" 00:33:38.717 } 00:33:38.717 ] 00:33:38.717 } 00:33:38.717 ] 00:33:38.717 } 00:33:38.717 [2024-11-20 13:53:46.164392] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:33:38.717 [2024-11-20 13:53:46.164523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73880 ] 00:33:38.717 [2024-11-20 13:53:46.343364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.976 [2024-11-20 13:53:46.467888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:39.234 Running I/O for 5 seconds... 
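A quick sanity check on the unmap bandwidth column above, with 4 KiB IOs: 78175.77 IOPS x 4096 B = 320,207,954 B/s, and 320,207,954 / 2^20 ≈ 305.37 MiB/s, matching the reported throughput.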
00:33:41.536 68711.00 IOPS, 268.40 MiB/s [2024-11-20T13:53:50.188Z] 64330.00 IOPS, 251.29 MiB/s [2024-11-20T13:53:51.120Z] 58491.33 IOPS, 228.48 MiB/s [2024-11-20T13:53:52.056Z] 53865.50 IOPS, 210.41 MiB/s [2024-11-20T13:53:52.056Z] 51512.20 IOPS, 201.22 MiB/s 00:33:44.337 Latency(us) 00:33:44.337 [2024-11-20T13:53:52.056Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:44.337 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:33:44.337 xnvme_bdev : 5.01 51425.74 200.88 0.00 0.00 1238.72 60.81 12821.02 00:33:44.337 [2024-11-20T13:53:52.056Z] =================================================================================================================== 00:33:44.337 [2024-11-20T13:53:52.056Z] Total : 51425.74 200.88 0.00 0.00 1238.72 60.81 12821.02 00:33:45.740 00:33:45.740 real 0m28.444s 00:33:45.740 user 0m19.253s 00:33:45.740 sys 0m7.359s 00:33:45.740 13:53:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:45.740 ************************************ 00:33:45.740 END TEST xnvme_bdevperf 00:33:45.740 ************************************ 00:33:45.740 13:53:53 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:45.740 13:53:53 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:33:45.740 13:53:53 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:45.740 13:53:53 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:45.740 13:53:53 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:33:45.740 ************************************ 00:33:45.740 START TEST xnvme_fio_plugin 00:33:45.740 ************************************ 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:33:45.740 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:45.997 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:45.997 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:45.997 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:33:45.997 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:45.997 13:53:53 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:33:45.997 { 00:33:45.997 "subsystems": [ 00:33:45.997 { 00:33:45.997 "subsystem": "bdev", 00:33:45.997 "config": [ 00:33:45.997 { 00:33:45.997 "params": { 00:33:45.997 "io_mechanism": "io_uring_cmd", 00:33:45.997 "conserve_cpu": true, 00:33:45.997 "filename": "/dev/ng0n1", 00:33:45.997 "name": "xnvme_bdev" 00:33:45.997 }, 00:33:45.997 "method": "bdev_xnvme_create" 00:33:45.997 }, 00:33:45.997 { 00:33:45.998 "method": "bdev_wait_for_examine" 00:33:45.998 } 00:33:45.998 ] 00:33:45.998 } 00:33:45.998 ] 00:33:45.998 } 00:33:45.998 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:33:45.998 fio-3.35 00:33:45.998 Starting 1 thread 00:33:52.563 00:33:52.563 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74004: Wed Nov 20 13:53:59 2024 00:33:52.563 read: IOPS=29.7k, BW=116MiB/s (121MB/s)(579MiB/5001msec) 00:33:52.563 slat (usec): min=2, max=303, avg= 7.70, stdev= 4.39 00:33:52.563 clat (usec): min=691, max=6405, avg=1854.99, stdev=611.36 00:33:52.563 lat (usec): min=694, max=6417, avg=1862.69, stdev=614.42 00:33:52.563 clat percentiles (usec): 00:33:52.563 | 1.00th=[ 807], 5.00th=[ 881], 10.00th=[ 930], 20.00th=[ 1037], 00:33:52.563 | 30.00th=[ 1532], 40.00th=[ 1893], 50.00th=[ 2040], 60.00th=[ 2147], 00:33:52.563 | 70.00th=[ 2278], 80.00th=[ 2409], 90.00th=[ 2540], 95.00th=[ 2638], 00:33:52.563 | 99.00th=[ 2802], 99.50th=[ 2868], 99.90th=[ 3130], 99.95th=[ 3228], 00:33:52.563 | 99.99th=[ 6259] 00:33:52.563 bw ( KiB/s): min=94208, max=166400, per=92.47%, avg=109681.78, stdev=23270.35, samples=9 00:33:52.563 iops : min=23552, max=41600, avg=27420.44, stdev=5817.59, samples=9 00:33:52.563 lat (usec) : 750=0.22%, 1000=16.46% 00:33:52.563 lat (msec) : 2=30.39%, 4=52.90%, 10=0.04% 00:33:52.563 cpu : usr=52.66%, sys=44.24%, ctx=66, majf=0, minf=762 00:33:52.563 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:33:52.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:52.563 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:33:52.563 issued rwts: total=148288,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:52.563 latency : target=0, window=0, percentile=100.00%, depth=64 00:33:52.563 00:33:52.563 Run status group 0 (all jobs): 00:33:52.563 READ: bw=116MiB/s (121MB/s), 116MiB/s-116MiB/s (121MB/s-121MB/s), io=579MiB (607MB), run=5001-5001msec 00:33:53.554 ----------------------------------------------------- 00:33:53.554 Suppressions used: 00:33:53.554 count bytes template 00:33:53.554 1 11 /usr/src/fio/parse.c 00:33:53.554 1 8 libtcmalloc_minimal.so 00:33:53.554 1 904 libcrypto.so 00:33:53.554 ----------------------------------------------------- 00:33:53.554 00:33:53.554 13:54:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:33:53.554 13:54:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:33:53.554 13:54:01 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:33:53.554 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:33:53.554 13:54:01 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:33:53.554 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:53.554 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:53.554 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:33:53.554 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:53.554 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:53.554 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:33:53.554 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:53.554 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:53.555 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:33:53.555 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:33:53.555 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:53.555 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:33:53.555 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:33:53.555 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:33:53.555 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:33:53.555 13:54:01 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:33:53.555 { 00:33:53.555 "subsystems": [ 00:33:53.555 { 00:33:53.555 "subsystem": "bdev", 00:33:53.555 "config": [ 00:33:53.555 { 00:33:53.555 "params": { 00:33:53.555 "io_mechanism": "io_uring_cmd", 00:33:53.555 "conserve_cpu": true, 00:33:53.555 "filename": "/dev/ng0n1", 00:33:53.555 "name": "xnvme_bdev" 00:33:53.555 }, 00:33:53.555 "method": "bdev_xnvme_create" 00:33:53.555 }, 00:33:53.555 { 00:33:53.555 "method": "bdev_wait_for_examine" 00:33:53.555 } 00:33:53.555 ] 00:33:53.555 } 00:33:53.555 ] 00:33:53.555 } 00:33:53.555 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:33:53.555 fio-3.35 00:33:53.555 Starting 1 thread 00:34:00.123 00:34:00.123 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=74101: Wed Nov 20 13:54:07 2024 00:34:00.123 write: IOPS=22.7k, BW=88.7MiB/s (93.1MB/s)(444MiB/5001msec); 0 zone resets 00:34:00.123 slat (usec): min=2, max=495, avg= 6.32, stdev= 5.16 00:34:00.123 clat (usec): min=70, max=28760, avg=2597.05, stdev=3382.83 00:34:00.123 lat (usec): min=73, max=28764, avg=2603.37, stdev=3382.73 00:34:00.123 clat percentiles (usec): 00:34:00.123 | 1.00th=[ 176], 5.00th=[ 898], 10.00th=[ 1004], 20.00th=[ 1106], 00:34:00.123 | 30.00th=[ 1221], 40.00th=[ 1434], 50.00th=[ 1647], 60.00th=[ 1860], 00:34:00.123 | 70.00th=[ 2089], 80.00th=[ 2343], 90.00th=[ 5014], 95.00th=[10945], 00:34:00.123 | 99.00th=[17695], 99.50th=[20841], 99.90th=[25560], 99.95th=[27132], 00:34:00.123 | 99.99th=[28181] 00:34:00.123 bw ( KiB/s): min=22672, max=160768, per=91.12%, avg=82801.78, stdev=51143.93, samples=9 00:34:00.123 iops : min= 5668, max=40192, avg=20700.44, stdev=12785.98, samples=9 00:34:00.123 lat (usec) : 100=0.07%, 250=2.06%, 500=1.85%, 750=0.52%, 1000=5.22% 00:34:00.123 lat (msec) : 2=56.53%, 4=23.30%, 10=4.49%, 20=5.33%, 50=0.63% 00:34:00.123 cpu : usr=67.02%, sys=27.38%, ctx=52, majf=0, minf=763 00:34:00.123 IO depths : 1=1.3%, 2=2.6%, 4=5.2%, 8=10.5%, 16=21.2%, 32=53.7%, >=64=5.4% 00:34:00.123 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:00.123 complete : 0=0.0%, 4=97.8%, 8=0.6%, 16=0.3%, 32=0.1%, 64=1.3%, >=64=0.0% 00:34:00.123 issued rwts: total=0,113613,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:00.123 latency : target=0, window=0, percentile=100.00%, depth=64 00:34:00.123 00:34:00.123 Run status group 0 (all jobs): 00:34:00.123 WRITE: bw=88.7MiB/s (93.1MB/s), 88.7MiB/s-88.7MiB/s (93.1MB/s-93.1MB/s), io=444MiB (465MB), run=5001-5001msec 00:34:01.060 ----------------------------------------------------- 00:34:01.060 Suppressions used: 00:34:01.060 count bytes template 00:34:01.060 1 11 /usr/src/fio/parse.c 00:34:01.060 1 8 libtcmalloc_minimal.so 00:34:01.060 1 904 libcrypto.so 00:34:01.060 ----------------------------------------------------- 00:34:01.060 00:34:01.060 00:34:01.060 real 0m15.139s 00:34:01.060 user 0m10.115s 00:34:01.060 sys 0m4.337s 00:34:01.060 13:54:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.060 13:54:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:34:01.060 ************************************ 00:34:01.060 END TEST xnvme_fio_plugin 00:34:01.060 ************************************ 00:34:01.060 13:54:08 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73554 00:34:01.060 13:54:08 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73554 ']' 00:34:01.060 Process with pid 73554 is not found 
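Both fio_bdev invocations traced above perform the same sanitizer-preload dance before exec'ing fio: an ASan-instrumented SPDK fio plugin needs libasan loaded ahead of the plugin itself, so the harness resolves the runtime from the plugin's ldd output and prepends it to LD_PRELOAD. A condensed sketch, with the JSON config written to a file as a stand-in for the harness's /dev/fd/62 stream:

```bash
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
# ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"; field 3 is the path
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

# ${asan_lib:+...} degrades to plugin-only preloading on non-ASan builds
LD_PRELOAD="${asan_lib:+$asan_lib }$plugin" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf=./xnvme_bdev.json \
    --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
    --rw=randwrite --time_based --runtime=5 --thread=1 --name=xnvme_bdev
```

With the spdk_bdev ioengine, --filename names a bdev from the JSON config rather than a block device path, which is why the jobs above target xnvme_bdev directly.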
00:34:01.060 13:54:08 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73554 00:34:01.060 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73554) - No such process 00:34:01.060 13:54:08 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73554 is not found' 00:34:01.060 ************************************ 00:34:01.060 END TEST nvme_xnvme 00:34:01.060 ************************************ 00:34:01.060 00:34:01.060 real 3m57.886s 00:34:01.060 user 2m20.198s 00:34:01.060 sys 1m22.004s 00:34:01.060 13:54:08 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.060 13:54:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:01.060 13:54:08 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:34:01.060 13:54:08 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:01.060 13:54:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:01.060 13:54:08 -- common/autotest_common.sh@10 -- # set +x 00:34:01.060 ************************************ 00:34:01.060 START TEST blockdev_xnvme 00:34:01.060 ************************************ 00:34:01.060 13:54:08 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:34:01.319 * Looking for test storage... 00:34:01.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:34:01.319 13:54:08 blockdev_xnvme -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:01.319 13:54:08 blockdev_xnvme -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:01.319 13:54:08 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lcov --version 00:34:01.319 13:54:08 blockdev_xnvme -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:34:01.319 13:54:08 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:01.320 13:54:08 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:01.320 13:54:08 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:34:01.320 13:54:08 blockdev_xnvme -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:01.320 13:54:08 blockdev_xnvme -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:01.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.320 --rc genhtml_branch_coverage=1 00:34:01.320 --rc genhtml_function_coverage=1 00:34:01.320 --rc genhtml_legend=1 00:34:01.320 --rc geninfo_all_blocks=1 00:34:01.320 --rc geninfo_unexecuted_blocks=1 00:34:01.320 00:34:01.320 ' 00:34:01.320 13:54:08 blockdev_xnvme -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:01.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.320 --rc genhtml_branch_coverage=1 00:34:01.320 --rc genhtml_function_coverage=1 00:34:01.320 --rc genhtml_legend=1 00:34:01.320 --rc geninfo_all_blocks=1 00:34:01.320 --rc geninfo_unexecuted_blocks=1 00:34:01.320 00:34:01.320 ' 00:34:01.320 13:54:08 blockdev_xnvme -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:01.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.320 --rc genhtml_branch_coverage=1 00:34:01.320 --rc genhtml_function_coverage=1 00:34:01.320 --rc genhtml_legend=1 00:34:01.320 --rc geninfo_all_blocks=1 00:34:01.320 --rc geninfo_unexecuted_blocks=1 00:34:01.320 00:34:01.320 ' 00:34:01.320 13:54:08 blockdev_xnvme -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:01.320 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.320 --rc genhtml_branch_coverage=1 00:34:01.320 --rc genhtml_function_coverage=1 00:34:01.320 --rc genhtml_legend=1 00:34:01.320 --rc geninfo_all_blocks=1 00:34:01.320 --rc geninfo_unexecuted_blocks=1 00:34:01.320 00:34:01.320 ' 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=74240 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 74240 00:34:01.320 13:54:08 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:34:01.320 13:54:08 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 74240 ']' 00:34:01.320 13:54:08 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.320 13:54:08 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:01.320 13:54:08 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:01.320 13:54:08 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:01.320 13:54:08 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:01.579 [2024-11-20 13:54:09.056297] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
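The start_spdk_tgt trace above forks the target and then waitforlisten blocks until its RPC socket answers, which is why the log pauses on the "Waiting for process to start up..." line before the SPDK banner. A poll-based stand-in for that helper (simplified: the real one in autotest_common.sh retries up to max_retries and handles alternate socket paths):

```bash
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/bin/spdk_tgt" &
spdk_tgt_pid=$!
trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT

# Retry a cheap RPC until /var/tmp/spdk.sock accepts it; bail if the target died.
until "$SPDK/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$spdk_tgt_pid" 2>/dev/null || exit 1
    sleep 0.1
done
```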
00:34:01.579 [2024-11-20 13:54:09.056643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74240 ] 00:34:01.579 [2024-11-20 13:54:09.242407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:01.838 [2024-11-20 13:54:09.365368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:02.778 13:54:10 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:02.778 13:54:10 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:34:02.778 13:54:10 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:34:02.778 13:54:10 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:34:02.778 13:54:10 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:34:02.778 13:54:10 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:34:02.778 13:54:10 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:34:03.348 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:34:03.918 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:34:03.918 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:34:03.918 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:34:03.918 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local nvme bdf 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n2 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n3 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 
00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1c1n1 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1c1n1 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1c1n1/queue/zoned ]] 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme2n1 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1661 -- # is_block_zoned nvme3n1 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme 
in /dev/nvme*n* 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:34:03.918 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.918 13:54:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:03.919 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:34:03.919 nvme0n1 00:34:03.919 nvme0n2 00:34:03.919 nvme0n3 00:34:03.919 nvme1n1 00:34:03.919 nvme2n1 00:34:03.919 nvme3n1 00:34:03.919 13:54:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.919 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:34:03.919 13:54:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.182 13:54:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:04.182 13:54:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.182 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:34:04.182 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:34:04.182 13:54:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.182 13:54:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:04.182 13:54:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.182 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:34:04.182 13:54:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.182 13:54:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:04.182 13:54:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.182 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:34:04.182 13:54:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.182 13:54:11 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:04.182 13:54:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.182 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:34:04.182 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:34:04.182 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:34:04.182 13:54:11 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.182 13:54:11 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:34:04.182 13:54:11 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:04.182 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:34:04.183 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "848ec16e-57b6-447c-bec5-91744203f087"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "848ec16e-57b6-447c-bec5-91744203f087",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "de0ea668-808e-4dd7-b502-4758dfd2ac50"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "de0ea668-808e-4dd7-b502-4758dfd2ac50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "1152afff-995c-4af8-9aea-8c5783fd1687"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1152afff-995c-4af8-9aea-8c5783fd1687",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b04da2e5-91a8-43a7-b457-ff7b4cf37a44"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b04da2e5-91a8-43a7-b457-ff7b4cf37a44",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": 
false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:34:04.183 ' "9080c7a5-2d73-4ee2-947c-5c592b7c6508"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9080c7a5-2d73-4ee2-947c-5c592b7c6508",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "30182b15-c8b0-4773-927e-a6aae1a4fdb1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "30182b15-c8b0-4773-927e-a6aae1a4fdb1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:34:04.183 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:34:04.183 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:34:04.183 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:34:04.183 13:54:11 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 74240 00:34:04.183 13:54:11 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 74240 ']' 00:34:04.183 13:54:11 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 74240 00:34:04.183 13:54:11 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:34:04.183 13:54:11 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.183 13:54:11 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74240 00:34:04.183 13:54:11 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:04.183 13:54:11 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:04.183 13:54:11 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74240' 00:34:04.183 killing process with pid 74240 00:34:04.183 13:54:11 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 74240 00:34:04.183 
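The setup_xnvme_conf sequence traced earlier in this stage reduces to: walk the NVMe namespaces, drop anything zoned, queue one bdev_xnvme_create per conventional namespace (io_uring mechanism, -c for conserve_cpu), and replay the batch against the target. A condensed sketch issuing the calls one at a time instead of through the harness's persistent rpc_cmd pipe:

```bash
SPDK=/home/vagrant/spdk_repo/spdk
io_mechanism=io_uring
nvmes=()
for nvme in /dev/nvme*n*; do
    [[ -b $nvme ]] || continue
    dev=${nvme##*/}
    # mirrors the is_block_zoned probe of /sys/block/*/queue/zoned above
    if [[ -e /sys/block/$dev/queue/zoned && \
          $(</sys/block/$dev/queue/zoned) != none ]]; then
        continue
    fi
    nvmes+=("bdev_xnvme_create $nvme $dev $io_mechanism -c")
done
for cmd in "${nvmes[@]}"; do
    "$SPDK/scripts/rpc.py" $cmd   # unquoted on purpose: split into arguments
done
```

The six resulting bdevs (nvme0n1 through nvme3n1) are what the bdev_get_bdevs dump above enumerates.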
13:54:11 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 74240 00:34:06.723 13:54:14 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:06.723 13:54:14 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:34:06.723 13:54:14 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:34:06.723 13:54:14 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:06.723 13:54:14 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:06.723 ************************************ 00:34:06.723 START TEST bdev_hello_world 00:34:06.723 ************************************ 00:34:06.723 13:54:14 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:34:06.723 [2024-11-20 13:54:14.404528] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:34:06.723 [2024-11-20 13:54:14.404667] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74531 ] 00:34:06.982 [2024-11-20 13:54:14.584486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:07.242 [2024-11-20 13:54:14.706757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:07.501 [2024-11-20 13:54:15.141503] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:34:07.502 [2024-11-20 13:54:15.141645] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:34:07.502 [2024-11-20 13:54:15.141669] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:34:07.502 [2024-11-20 13:54:15.143833] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:34:07.502 [2024-11-20 13:54:15.144198] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:34:07.502 [2024-11-20 13:54:15.144216] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:34:07.502 [2024-11-20 13:54:15.144432] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
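The hello_bdev run being traced here is self-contained and can be replayed against the same config; -b selects which of the six examined bdevs the example writes "Hello World!" to and reads back (paths verbatim from this run):

```bash
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/build/examples/hello_bdev" --json "$SPDK/test/bdev/bdev.json" -b nvme0n1
```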
00:34:07.502 00:34:07.502 [2024-11-20 13:54:15.144452] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:34:08.882 00:34:08.882 real 0m1.990s 00:34:08.882 user 0m1.641s 00:34:08.882 sys 0m0.225s 00:34:08.882 13:54:16 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:08.882 13:54:16 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:34:08.882 ************************************ 00:34:08.882 END TEST bdev_hello_world 00:34:08.882 ************************************ 00:34:08.882 13:54:16 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:34:08.882 13:54:16 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:08.882 13:54:16 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:08.882 13:54:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:08.882 ************************************ 00:34:08.882 START TEST bdev_bounds 00:34:08.882 ************************************ 00:34:08.882 13:54:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:34:08.882 13:54:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74574 00:34:08.882 13:54:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:08.882 13:54:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:34:08.882 Process bdevio pid: 74574 00:34:08.882 13:54:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74574' 00:34:08.883 13:54:16 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74574 00:34:08.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:08.883 13:54:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74574 ']' 00:34:08.883 13:54:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.883 13:54:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:08.883 13:54:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.883 13:54:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:08.883 13:54:16 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:34:08.883 [2024-11-20 13:54:16.461968] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:34:08.883 [2024-11-20 13:54:16.462180] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74574 ] 00:34:09.143 [2024-11-20 13:54:16.640923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:09.143 [2024-11-20 13:54:16.768933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:09.143 [2024-11-20 13:54:16.769123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.143 [2024-11-20 13:54:16.769172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:09.713 13:54:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:09.713 13:54:17 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:34:09.713 13:54:17 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:34:09.973 I/O targets: 00:34:09.973 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:34:09.973 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:34:09.973 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:34:09.973 nvme1n1: 262144 blocks of 4096 bytes (1024 MiB) 00:34:09.973 nvme2n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:34:09.973 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:34:09.973 00:34:09.973 00:34:09.973 CUnit - A unit testing framework for C - Version 2.1-3 00:34:09.973 http://cunit.sourceforge.net/ 00:34:09.973 00:34:09.973 00:34:09.973 Suite: bdevio tests on: nvme3n1 00:34:09.973 Test: blockdev write read block ...passed 00:34:09.973 Test: blockdev write zeroes read block ...passed 00:34:09.973 Test: blockdev write zeroes read no split ...passed 00:34:09.973 Test: blockdev write zeroes read split ...passed 00:34:09.973 Test: blockdev write zeroes read split partial ...passed 00:34:09.973 Test: blockdev reset ...passed 00:34:09.973 Test: blockdev write read 8 blocks ...passed 00:34:09.973 Test: blockdev write read size > 128k ...passed 00:34:09.973 Test: blockdev write read invalid size ...passed 00:34:09.973 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:09.973 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:09.973 Test: blockdev write read max offset ...passed 00:34:09.973 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:09.973 Test: blockdev writev readv 8 blocks ...passed 00:34:09.973 Test: blockdev writev readv 30 x 1block ...passed 00:34:09.973 Test: blockdev writev readv block ...passed 00:34:09.973 Test: blockdev writev readv size > 128k ...passed 00:34:09.973 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:09.973 Test: blockdev comparev and writev ...passed 00:34:09.973 Test: blockdev nvme passthru rw ...passed 00:34:09.973 Test: blockdev nvme passthru vendor specific ...passed 00:34:09.973 Test: blockdev nvme admin passthru ...passed 00:34:09.973 Test: blockdev copy ...passed 00:34:09.973 Suite: bdevio tests on: nvme2n1 00:34:09.973 Test: blockdev write read block ...passed 00:34:09.973 Test: blockdev write zeroes read block ...passed 00:34:09.973 Test: blockdev write zeroes read no split ...passed 00:34:09.974 Test: blockdev write zeroes read split ...passed 00:34:09.974 Test: blockdev write zeroes read split partial ...passed 00:34:09.974 Test: blockdev reset ...passed 
00:34:09.974 Test: blockdev write read 8 blocks ...passed 00:34:09.974 Test: blockdev write read size > 128k ...passed 00:34:09.974 Test: blockdev write read invalid size ...passed 00:34:09.974 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:09.974 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:09.974 Test: blockdev write read max offset ...passed 00:34:09.974 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:09.974 Test: blockdev writev readv 8 blocks ...passed 00:34:09.974 Test: blockdev writev readv 30 x 1block ...passed 00:34:09.974 Test: blockdev writev readv block ...passed 00:34:09.974 Test: blockdev writev readv size > 128k ...passed 00:34:09.974 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:09.974 Test: blockdev comparev and writev ...passed 00:34:09.974 Test: blockdev nvme passthru rw ...passed 00:34:09.974 Test: blockdev nvme passthru vendor specific ...passed 00:34:09.974 Test: blockdev nvme admin passthru ...passed 00:34:09.974 Test: blockdev copy ...passed 00:34:09.974 Suite: bdevio tests on: nvme1n1 00:34:09.974 Test: blockdev write read block ...passed 00:34:10.233 Test: blockdev write zeroes read block ...passed 00:34:10.233 Test: blockdev write zeroes read no split ...passed 00:34:10.233 Test: blockdev write zeroes read split ...passed 00:34:10.233 Test: blockdev write zeroes read split partial ...passed 00:34:10.233 Test: blockdev reset ...passed 00:34:10.233 Test: blockdev write read 8 blocks ...passed 00:34:10.233 Test: blockdev write read size > 128k ...passed 00:34:10.233 Test: blockdev write read invalid size ...passed 00:34:10.233 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:10.233 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:10.233 Test: blockdev write read max offset ...passed 00:34:10.233 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:10.233 Test: blockdev writev readv 8 blocks ...passed 00:34:10.233 Test: blockdev writev readv 30 x 1block ...passed 00:34:10.233 Test: blockdev writev readv block ...passed 00:34:10.233 Test: blockdev writev readv size > 128k ...passed 00:34:10.233 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:10.233 Test: blockdev comparev and writev ...passed 00:34:10.233 Test: blockdev nvme passthru rw ...passed 00:34:10.233 Test: blockdev nvme passthru vendor specific ...passed 00:34:10.233 Test: blockdev nvme admin passthru ...passed 00:34:10.233 Test: blockdev copy ...passed 00:34:10.233 Suite: bdevio tests on: nvme0n3 00:34:10.233 Test: blockdev write read block ...passed 00:34:10.233 Test: blockdev write zeroes read block ...passed 00:34:10.233 Test: blockdev write zeroes read no split ...passed 00:34:10.233 Test: blockdev write zeroes read split ...passed 00:34:10.233 Test: blockdev write zeroes read split partial ...passed 00:34:10.233 Test: blockdev reset ...passed 00:34:10.233 Test: blockdev write read 8 blocks ...passed 00:34:10.233 Test: blockdev write read size > 128k ...passed 00:34:10.233 Test: blockdev write read invalid size ...passed 00:34:10.233 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:10.233 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:10.233 Test: blockdev write read max offset ...passed 00:34:10.233 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:10.233 Test: blockdev writev readv 8 blocks 
...passed 00:34:10.233 Test: blockdev writev readv 30 x 1block ...passed 00:34:10.233 Test: blockdev writev readv block ...passed 00:34:10.233 Test: blockdev writev readv size > 128k ...passed 00:34:10.233 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:10.233 Test: blockdev comparev and writev ...passed 00:34:10.233 Test: blockdev nvme passthru rw ...passed 00:34:10.233 Test: blockdev nvme passthru vendor specific ...passed 00:34:10.233 Test: blockdev nvme admin passthru ...passed 00:34:10.233 Test: blockdev copy ...passed 00:34:10.233 Suite: bdevio tests on: nvme0n2 00:34:10.233 Test: blockdev write read block ...passed 00:34:10.233 Test: blockdev write zeroes read block ...passed 00:34:10.233 Test: blockdev write zeroes read no split ...passed 00:34:10.233 Test: blockdev write zeroes read split ...passed 00:34:10.233 Test: blockdev write zeroes read split partial ...passed 00:34:10.233 Test: blockdev reset ...passed 00:34:10.234 Test: blockdev write read 8 blocks ...passed 00:34:10.234 Test: blockdev write read size > 128k ...passed 00:34:10.234 Test: blockdev write read invalid size ...passed 00:34:10.234 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:10.234 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:10.234 Test: blockdev write read max offset ...passed 00:34:10.234 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:10.234 Test: blockdev writev readv 8 blocks ...passed 00:34:10.234 Test: blockdev writev readv 30 x 1block ...passed 00:34:10.234 Test: blockdev writev readv block ...passed 00:34:10.234 Test: blockdev writev readv size > 128k ...passed 00:34:10.234 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:10.234 Test: blockdev comparev and writev ...passed 00:34:10.234 Test: blockdev nvme passthru rw ...passed 00:34:10.234 Test: blockdev nvme passthru vendor specific ...passed 00:34:10.234 Test: blockdev nvme admin passthru ...passed 00:34:10.234 Test: blockdev copy ...passed 00:34:10.234 Suite: bdevio tests on: nvme0n1 00:34:10.234 Test: blockdev write read block ...passed 00:34:10.234 Test: blockdev write zeroes read block ...passed 00:34:10.234 Test: blockdev write zeroes read no split ...passed 00:34:10.498 Test: blockdev write zeroes read split ...passed 00:34:10.498 Test: blockdev write zeroes read split partial ...passed 00:34:10.498 Test: blockdev reset ...passed 00:34:10.498 Test: blockdev write read 8 blocks ...passed 00:34:10.498 Test: blockdev write read size > 128k ...passed 00:34:10.498 Test: blockdev write read invalid size ...passed 00:34:10.498 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:34:10.498 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:34:10.498 Test: blockdev write read max offset ...passed 00:34:10.498 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:34:10.498 Test: blockdev writev readv 8 blocks ...passed 00:34:10.498 Test: blockdev writev readv 30 x 1block ...passed 00:34:10.498 Test: blockdev writev readv block ...passed 00:34:10.498 Test: blockdev writev readv size > 128k ...passed 00:34:10.498 Test: blockdev writev readv size > 128k in two iovs ...passed 00:34:10.498 Test: blockdev comparev and writev ...passed 00:34:10.498 Test: blockdev nvme passthru rw ...passed 00:34:10.498 Test: blockdev nvme passthru vendor specific ...passed 00:34:10.498 Test: blockdev nvme admin passthru ...passed 00:34:10.498 Test: blockdev copy ...passed 
00:34:10.498 00:34:10.498 Run Summary: Type Total Ran Passed Failed Inactive 00:34:10.498 suites 6 6 n/a 0 0 00:34:10.498 tests 138 138 138 0 0 00:34:10.498 asserts 780 780 780 0 n/a 00:34:10.498 00:34:10.498 Elapsed time = 1.553 seconds 00:34:10.498 0 00:34:10.498 13:54:18 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74574 00:34:10.498 13:54:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74574 ']' 00:34:10.498 13:54:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74574 00:34:10.498 13:54:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:34:10.498 13:54:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:10.498 13:54:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74574 00:34:10.498 killing process with pid 74574 00:34:10.498 13:54:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:10.498 13:54:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:10.498 13:54:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74574' 00:34:10.498 13:54:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74574 00:34:10.498 13:54:18 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74574 00:34:11.884 ************************************ 00:34:11.884 END TEST bdev_bounds 00:34:11.884 ************************************ 00:34:11.884 13:54:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:34:11.884 00:34:11.884 real 0m2.945s 00:34:11.884 user 0m7.465s 00:34:11.884 sys 0m0.406s 00:34:11.884 13:54:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:11.884 13:54:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:34:11.884 13:54:19 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:34:11.884 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:34:11.884 13:54:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:11.884 13:54:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:11.884 ************************************ 00:34:11.884 START TEST bdev_nbd 00:34:11.884 ************************************ 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
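The bounds stage summarized above (6 suites, 138 tests, 780 asserts) is a two-process affair: bdevio starts in wait mode (-w) against the shared bdev.json, and tests.py fires the CUnit suites over RPC. A by-hand replay, with a sleep as a crude stand-in for the harness's waitforlisten:

```bash
SPDK=/home/vagrant/spdk_repo/spdk
"$SPDK/test/bdev/bdevio/bdevio" -w -s 0 --json "$SPDK/test/bdev/bdev.json" &
bdevio_pid=$!
sleep 1
"$SPDK/test/bdev/bdevio/tests.py" perform_tests
kill "$bdevio_pid"
```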
00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74639 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74639 /var/tmp/spdk-nbd.sock 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74639 ']' 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:34:11.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.884 13:54:19 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:11.884 [2024-11-20 13:54:19.477785] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:34:11.884 [2024-11-20 13:54:19.478002] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.144 [2024-11-20 13:54:19.637542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:12.144 [2024-11-20 13:54:19.761097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:12.714 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:12.715 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:34:12.715 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:34:12.715 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.715 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:34:12.715 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:34:12.715 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:34:12.715 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:12.715 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:34:12.715 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:34:12.715 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:34:12.715 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:34:12.715 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:34:12.715 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:12.715 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:12.974 
1+0 records in 00:34:12.974 1+0 records out 00:34:12.974 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000726101 s, 5.6 MB/s 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:12.974 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:13.235 1+0 records in 00:34:13.235 1+0 records out 00:34:13.235 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000713125 s, 5.7 MB/s 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:13.235 13:54:20 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:34:13.495 13:54:21 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:13.495 1+0 records in 00:34:13.495 1+0 records out 00:34:13.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000813818 s, 5.0 MB/s 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:13.495 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:13.756 1+0 records in 00:34:13.756 1+0 records out 00:34:13.756 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000525678 s, 7.8 MB/s 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:13.756 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:14.016 1+0 records in 00:34:14.016 1+0 records out 00:34:14.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135746 s, 3.0 MB/s 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:14.016 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:34:14.276 13:54:21 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:14.276 1+0 records in 00:34:14.276 1+0 records out 00:34:14.276 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000629585 s, 6.5 MB/s 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:34:14.276 13:54:21 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:14.581 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:34:14.581 { 00:34:14.581 "nbd_device": "/dev/nbd0", 00:34:14.581 "bdev_name": "nvme0n1" 00:34:14.581 }, 00:34:14.581 { 00:34:14.581 "nbd_device": "/dev/nbd1", 00:34:14.581 "bdev_name": "nvme0n2" 00:34:14.581 }, 00:34:14.581 { 00:34:14.581 "nbd_device": "/dev/nbd2", 00:34:14.581 "bdev_name": "nvme0n3" 00:34:14.581 }, 00:34:14.581 { 00:34:14.581 "nbd_device": "/dev/nbd3", 00:34:14.581 "bdev_name": "nvme1n1" 00:34:14.581 }, 00:34:14.581 { 00:34:14.581 "nbd_device": "/dev/nbd4", 00:34:14.581 "bdev_name": "nvme2n1" 00:34:14.581 }, 00:34:14.581 { 00:34:14.581 "nbd_device": "/dev/nbd5", 00:34:14.581 "bdev_name": "nvme3n1" 00:34:14.581 } 00:34:14.581 ]' 00:34:14.581 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:34:14.581 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:34:14.581 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:34:14.581 { 00:34:14.581 "nbd_device": "/dev/nbd0", 00:34:14.581 "bdev_name": "nvme0n1" 00:34:14.581 }, 00:34:14.581 { 00:34:14.581 "nbd_device": "/dev/nbd1", 00:34:14.581 "bdev_name": "nvme0n2" 00:34:14.581 }, 00:34:14.581 { 00:34:14.581 "nbd_device": "/dev/nbd2", 00:34:14.581 "bdev_name": "nvme0n3" 00:34:14.581 }, 00:34:14.581 { 00:34:14.581 "nbd_device": "/dev/nbd3", 00:34:14.581 "bdev_name": "nvme1n1" 00:34:14.581 }, 00:34:14.581 { 00:34:14.581 "nbd_device": "/dev/nbd4", 00:34:14.581 "bdev_name": "nvme2n1" 00:34:14.581 }, 00:34:14.581 { 00:34:14.581 "nbd_device": 
"/dev/nbd5", 00:34:14.581 "bdev_name": "nvme3n1" 00:34:14.581 } 00:34:14.581 ]' 00:34:14.581 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:34:14.581 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:14.581 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:34:14.581 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:14.581 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:14.581 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:14.581 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:14.841 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:14.841 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:14.841 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:14.841 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:14.841 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:14.841 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:14.841 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:14.841 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:14.841 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:14.841 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:34:15.100 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:15.100 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:15.100 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:15.100 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:15.100 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:15.100 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:15.100 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:15.100 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:15.100 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:15.100 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:34:15.360 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:34:15.360 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:34:15.360 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:34:15.360 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:15.360 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:15.360 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:34:15.360 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:15.360 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:15.360 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:15.360 13:54:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:34:15.619 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:34:15.619 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:34:15.619 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:34:15.619 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:15.619 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:15.619 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:34:15.619 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:15.619 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:15.619 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:15.619 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:34:15.877 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:34:15.877 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:34:15.877 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:34:15.877 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:15.877 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:15.877 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:34:15.877 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:15.877 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:15.877 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:15.877 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:34:16.135 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:34:16.135 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:34:16.135 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:34:16.135 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:16.135 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:16.135 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:34:16.135 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:16.135 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:16.135 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:16.135 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:16.135 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:16.394 13:54:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:34:16.653 /dev/nbd0 00:34:16.653 13:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:34:16.653 13:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:34:16.653 13:54:24 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:34:16.653 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:16.653 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:16.653 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:16.653 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:34:16.653 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:16.653 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:16.653 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:16.653 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:16.653 1+0 records in 00:34:16.654 1+0 records out 00:34:16.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000573131 s, 7.1 MB/s 00:34:16.654 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:16.654 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:16.654 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:16.654 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:16.654 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:16.654 13:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:16.654 13:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:16.654 13:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:34:16.911 /dev/nbd1 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:16.911 1+0 records in 00:34:16.911 1+0 records out 00:34:16.911 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00057317 s, 7.1 MB/s 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:16.911 13:54:24 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:16.911 13:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:34:17.169 /dev/nbd10 00:34:17.169 13:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:34:17.169 13:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:34:17.169 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:34:17.169 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:17.169 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:17.169 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:17.169 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:34:17.169 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:17.169 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:17.169 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:17.169 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:17.169 1+0 records in 00:34:17.169 1+0 records out 00:34:17.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000803678 s, 5.1 MB/s 00:34:17.169 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:17.169 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:17.170 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:17.170 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:17.170 13:54:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:17.170 13:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:17.170 13:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:17.170 13:54:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:34:17.428 /dev/nbd11 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:17.428 13:54:25 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:17.428 1+0 records in 00:34:17.428 1+0 records out 00:34:17.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000747479 s, 5.5 MB/s 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:17.428 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:34:17.686 /dev/nbd12 00:34:17.686 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:34:17.686 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:34:17.686 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:34:17.686 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:17.686 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:17.686 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:17.686 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:34:17.686 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:17.686 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:17.686 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:17.686 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:17.686 1+0 records in 00:34:17.686 1+0 records out 00:34:17.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000740756 s, 5.5 MB/s 00:34:17.686 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:17.686 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:17.686 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:17.687 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:17.687 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:17.687 13:54:25 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:17.687 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:17.687 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:34:17.945 /dev/nbd13 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:34:17.945 1+0 records in 00:34:17.945 1+0 records out 00:34:17.945 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000594404 s, 6.9 MB/s 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:17.945 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:18.204 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:34:18.204 { 00:34:18.204 "nbd_device": "/dev/nbd0", 00:34:18.204 "bdev_name": "nvme0n1" 00:34:18.204 }, 00:34:18.204 { 00:34:18.204 "nbd_device": "/dev/nbd1", 00:34:18.204 "bdev_name": "nvme0n2" 00:34:18.204 }, 00:34:18.204 { 00:34:18.204 "nbd_device": "/dev/nbd10", 00:34:18.204 "bdev_name": "nvme0n3" 00:34:18.204 }, 00:34:18.204 { 00:34:18.204 "nbd_device": "/dev/nbd11", 00:34:18.204 "bdev_name": "nvme1n1" 00:34:18.204 }, 00:34:18.204 { 00:34:18.204 "nbd_device": "/dev/nbd12", 00:34:18.204 "bdev_name": "nvme2n1" 00:34:18.204 }, 00:34:18.204 { 00:34:18.204 "nbd_device": "/dev/nbd13", 00:34:18.204 "bdev_name": "nvme3n1" 00:34:18.204 } 00:34:18.204 ]' 00:34:18.204 13:54:25 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:34:18.204 { 00:34:18.204 "nbd_device": "/dev/nbd0", 00:34:18.204 "bdev_name": "nvme0n1" 00:34:18.204 }, 00:34:18.204 { 00:34:18.204 "nbd_device": "/dev/nbd1", 00:34:18.204 "bdev_name": "nvme0n2" 00:34:18.204 }, 00:34:18.204 { 00:34:18.204 "nbd_device": "/dev/nbd10", 00:34:18.204 "bdev_name": "nvme0n3" 00:34:18.204 }, 00:34:18.204 { 00:34:18.204 "nbd_device": "/dev/nbd11", 00:34:18.204 "bdev_name": "nvme1n1" 00:34:18.204 }, 00:34:18.204 { 00:34:18.204 "nbd_device": "/dev/nbd12", 00:34:18.204 "bdev_name": "nvme2n1" 00:34:18.204 }, 00:34:18.204 { 00:34:18.204 "nbd_device": "/dev/nbd13", 00:34:18.204 "bdev_name": "nvme3n1" 00:34:18.204 } 00:34:18.204 ]' 00:34:18.204 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:18.204 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:34:18.204 /dev/nbd1 00:34:18.204 /dev/nbd10 00:34:18.204 /dev/nbd11 00:34:18.204 /dev/nbd12 00:34:18.204 /dev/nbd13' 00:34:18.204 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:34:18.204 /dev/nbd1 00:34:18.204 /dev/nbd10 00:34:18.204 /dev/nbd11 00:34:18.204 /dev/nbd12 00:34:18.204 /dev/nbd13' 00:34:18.204 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:18.463 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:34:18.463 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:34:18.463 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:34:18.463 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:34:18.463 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:34:18.463 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:18.463 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:18.463 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:34:18.463 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:18.463 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:34:18.463 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:34:18.463 256+0 records in 00:34:18.463 256+0 records out 00:34:18.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142009 s, 73.8 MB/s 00:34:18.463 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:18.463 13:54:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:34:18.463 256+0 records in 00:34:18.463 256+0 records out 00:34:18.463 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0993094 s, 10.6 MB/s 00:34:18.463 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:18.463 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:34:18.463 256+0 records in 00:34:18.463 256+0 records out 00:34:18.463 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.104133 s, 10.1 MB/s 00:34:18.463 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:18.463 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:34:18.721 256+0 records in 00:34:18.721 256+0 records out 00:34:18.721 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.100866 s, 10.4 MB/s 00:34:18.722 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:18.722 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:34:18.722 256+0 records in 00:34:18.722 256+0 records out 00:34:18.722 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0921644 s, 11.4 MB/s 00:34:18.722 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:18.722 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:34:18.980 256+0 records in 00:34:18.980 256+0 records out 00:34:18.980 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116184 s, 9.0 MB/s 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:34:18.980 256+0 records in 00:34:18.980 256+0 records out 00:34:18.980 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.092827 s, 11.3 MB/s 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:18.980 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:19.239 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:19.239 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:19.239 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:19.239 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:19.239 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:19.239 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:19.239 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:19.239 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:19.239 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:19.239 13:54:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:34:19.497 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:34:19.497 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:34:19.497 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:34:19.497 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:19.497 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:19.497 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:34:19.497 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:19.497 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:19.497 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:19.497 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:34:19.756 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:34:19.756 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:34:19.756 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:34:19.756 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:19.756 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:19.756 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:34:19.756 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:19.756 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:19.756 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:19.756 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:34:20.014 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:34:20.014 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:34:20.014 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:34:20.014 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:20.014 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:20.014 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:34:20.014 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:20.014 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:20.014 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:20.014 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:34:20.272 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:34:20.272 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:34:20.272 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:34:20.272 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:20.272 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:20.272 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:34:20.272 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:20.272 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:20.272 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:20.272 13:54:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:34:20.531 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:34:20.531 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:34:20.531 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:34:20.531 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:20.531 13:54:28 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:20.531 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:34:20.531 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:20.531 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:20.531 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:34:20.531 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:20.531 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:34:20.793 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:34:20.793 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:34:20.793 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:34:20.793 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:34:20.793 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:34:20.793 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:34:20.793 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:34:20.793 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:34:20.793 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:34:20.793 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:34:20.793 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:34:20.793 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:34:20.793 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:20.793 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:20.793 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:34:20.793 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:34:21.056 malloc_lvol_verify 00:34:21.056 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:34:21.313 50a0ff63-a893-4fd0-a613-32cacd5263d0 00:34:21.313 13:54:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:34:21.571 0c5b0507-cc40-4035-9e60-73a99c2122fe 00:34:21.571 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:34:21.829 /dev/nbd0 00:34:21.829 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:34:21.829 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:34:21.829 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:34:21.829 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:34:21.829 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
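Note on the lvol-verify pass traced above: nbd_with_lvol_verify drives a fixed RPC sequence (malloc bdev, lvstore, lvol, NBD export) and then formats the result; the mke2fs output that follows below is that format step. A minimal standalone sketch of the same sequence, with the socket path, sizes, and names taken verbatim from the trace (error handling and the capacity-wait step at nbd_common.sh@139 are omitted):

    RPC='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB backing bdev, 512 B blocks
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # lvstore on top of the malloc bdev
    $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MiB logical volume inside 'lvs'
    $RPC nbd_start_disk lvs/lvol /dev/nbd0                 # expose the lvol as /dev/nbd0
    mkfs.ext4 /dev/nbd0                                    # format it, as the harness does here

Tearing it down mirrors the stop loops above: nbd_stop_disk per device, then polling /proc/partitions (up to 20 tries in this trace) until the nbdX entry disappears.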
00:34:21.829 mke2fs 1.47.0 (5-Feb-2023) 00:34:21.829 Discarding device blocks: 0/4096 done 00:34:21.829 Creating filesystem with 4096 1k blocks and 1024 inodes 00:34:21.829 00:34:21.829 Allocating group tables: 0/1 done 00:34:21.829 Writing inode tables: 0/1 done 00:34:21.829 Creating journal (1024 blocks): done 00:34:21.829 Writing superblocks and filesystem accounting information: 0/1 done 00:34:21.829 00:34:21.829 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:34:21.829 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:34:21.829 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:34:21.829 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:34:21.829 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:34:21.829 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:34:21.829 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74639 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74639 ']' 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74639 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74639 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74639' 00:34:22.088 killing process with pid 74639 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74639 00:34:22.088 13:54:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74639 00:34:23.495 13:54:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:34:23.495 00:34:23.495 real 0m11.664s 00:34:23.495 user 0m15.806s 00:34:23.495 sys 0m4.403s 00:34:23.495 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:23.495 13:54:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:34:23.495 ************************************ 
00:34:23.495 END TEST bdev_nbd 00:34:23.495 ************************************ 00:34:23.495 13:54:31 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:34:23.495 13:54:31 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:34:23.495 13:54:31 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:34:23.495 13:54:31 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:34:23.495 13:54:31 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:23.495 13:54:31 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:23.495 13:54:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:23.495 ************************************ 00:34:23.495 START TEST bdev_fio 00:34:23.495 ************************************ 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:34:23.495 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # 
echo serialize_overlap=1 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]' 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]' 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:34:23.495 ************************************ 00:34:23.495 START TEST bdev_fio_rw_verify 00:34:23.495 ************************************ 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:23.495 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:23.754 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:34:23.754 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:34:23.754 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:23.754 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:34:23.754 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:34:23.754 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break 00:34:23.754 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:34:23.754 13:54:31 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:34:23.754 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:23.754 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:23.754 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:23.754 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:23.755 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:23.755 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:34:23.755 fio-3.35 00:34:23.755 Starting 6 threads 00:34:35.961 00:34:35.961 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=75056: Wed Nov 20 13:54:42 2024 00:34:35.961 read: IOPS=32.8k, BW=128MiB/s (134MB/s)(1279MiB/10001msec) 00:34:35.961 slat (usec): min=2, max=1272, avg= 8.11, stdev= 6.09 00:34:35.961 clat (usec): min=71, max=4658, avg=449.67, 
stdev=237.39 00:34:35.961 lat (usec): min=78, max=4673, avg=457.79, stdev=238.67 00:34:35.961 clat percentiles (usec): 00:34:35.961 | 50.000th=[ 408], 99.000th=[ 1172], 99.900th=[ 1778], 99.990th=[ 3884], 00:34:35.961 | 99.999th=[ 4621] 00:34:35.961 write: IOPS=33.2k, BW=130MiB/s (136MB/s)(1297MiB/10001msec); 0 zone resets 00:34:35.961 slat (usec): min=12, max=1768, avg=38.98, stdev=46.35 00:34:35.961 clat (usec): min=62, max=4761, avg=636.19, stdev=298.37 00:34:35.961 lat (usec): min=77, max=4791, avg=675.18, stdev=308.72 00:34:35.961 clat percentiles (usec): 00:34:35.961 | 50.000th=[ 603], 99.000th=[ 1500], 99.900th=[ 2114], 99.990th=[ 2966], 00:34:35.961 | 99.999th=[ 4424] 00:34:35.961 bw ( KiB/s): min=108417, max=156638, per=99.95%, avg=132710.84, stdev=2326.43, samples=114 00:34:35.961 iops : min=27103, max=39159, avg=33177.05, stdev=581.60, samples=114 00:34:35.961 lat (usec) : 100=0.01%, 250=13.06%, 500=37.54%, 750=28.76%, 1000=13.90% 00:34:35.961 lat (msec) : 2=6.62%, 4=0.11%, 10=0.01% 00:34:35.961 cpu : usr=48.31%, sys=32.16%, ctx=8918, majf=0, minf=27343 00:34:35.961 IO depths : 1=11.5%, 2=23.7%, 4=51.2%, 8=13.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:35.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.961 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:35.961 issued rwts: total=327537,331970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:35.961 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:35.961 00:34:35.961 Run status group 0 (all jobs): 00:34:35.961 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=1279MiB (1342MB), run=10001-10001msec 00:34:35.961 WRITE: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=1297MiB (1360MB), run=10001-10001msec 00:34:36.221 ----------------------------------------------------- 00:34:36.221 Suppressions used: 00:34:36.221 count bytes template 00:34:36.221 6 48 /usr/src/fio/parse.c 00:34:36.221 4213 404448 /usr/src/fio/iolog.c 00:34:36.221 1 8 libtcmalloc_minimal.so 00:34:36.221 1 904 libcrypto.so 00:34:36.221 ----------------------------------------------------- 00:34:36.221 00:34:36.221 00:34:36.221 real 0m12.599s 00:34:36.221 user 0m31.117s 00:34:36.221 sys 0m19.680s 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:36.221 ************************************ 00:34:36.221 END TEST bdev_fio_rw_verify 00:34:36.221 ************************************ 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local 
fio_dir=/usr/src/fio 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:34:36.221 13:54:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:34:36.222 13:54:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "848ec16e-57b6-447c-bec5-91744203f087"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "848ec16e-57b6-447c-bec5-91744203f087",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "de0ea668-808e-4dd7-b502-4758dfd2ac50"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "de0ea668-808e-4dd7-b502-4758dfd2ac50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "1152afff-995c-4af8-9aea-8c5783fd1687"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1152afff-995c-4af8-9aea-8c5783fd1687",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "b04da2e5-91a8-43a7-b457-ff7b4cf37a44"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "b04da2e5-91a8-43a7-b457-ff7b4cf37a44",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "9080c7a5-2d73-4ee2-947c-5c592b7c6508"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "9080c7a5-2d73-4ee2-947c-5c592b7c6508",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "30182b15-c8b0-4773-927e-a6aae1a4fdb1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "30182b15-c8b0-4773-927e-a6aae1a4fdb1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:34:36.222 13:54:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:34:36.222 13:54:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:34:36.222 /home/vagrant/spdk_repo/spdk 00:34:36.222 13:54:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:34:36.222 13:54:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:34:36.222 13:54:43 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
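One mechanism from the fio suite above is worth spelling out: before invoking fio with the spdk_bdev plugin, the harness detects which sanitizer runtime the plugin links against and preloads it together with the plugin, so the ASan runtime is in place before fio dlopens the instrumented library. A condensed sketch of that detection as it appears in the trace (the real helper in autotest_common.sh also probes libclang_rt.asan and loops over both candidates; the trailing fio arguments are elided here):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # resolves to /usr/lib64/libasan.so.8 in this run
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio --ioengine=spdk_bdev ...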
00:34:36.222 00:34:36.222 real 0m12.827s 00:34:36.222 user 0m31.247s 00:34:36.222 sys 0m19.788s 00:34:36.222 13:54:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:36.222 13:54:43 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:34:36.222 ************************************ 00:34:36.222 END TEST bdev_fio 00:34:36.222 ************************************ 00:34:36.481 13:54:43 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:34:36.481 13:54:43 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:36.481 13:54:43 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:34:36.481 13:54:43 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:36.481 13:54:43 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:36.481 ************************************ 00:34:36.481 START TEST bdev_verify 00:34:36.481 ************************************ 00:34:36.481 13:54:43 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:34:36.481 [2024-11-20 13:54:44.083034] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:34:36.481 [2024-11-20 13:54:44.083155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75226 ] 00:34:36.742 [2024-11-20 13:54:44.256900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:36.742 [2024-11-20 13:54:44.371571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:36.742 [2024-11-20 13:54:44.371602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:37.311 Running I/O for 5 seconds... 
00:34:39.626 24512.00 IOPS, 95.75 MiB/s
[2024-11-20T13:54:48.283Z] 25248.00 IOPS, 98.62 MiB/s
[2024-11-20T13:54:49.220Z] 24853.33 IOPS, 97.08 MiB/s
[2024-11-20T13:54:50.157Z] 24336.00 IOPS, 95.06 MiB/s
[2024-11-20T13:54:50.157Z] 24236.80 IOPS, 94.68 MiB/s
00:34:42.438 Latency(us)
00:34:42.438 [2024-11-20T13:54:50.157Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:42.438 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:42.438 Verification LBA range: start 0x0 length 0x80000
00:34:42.438 nvme0n1 : 5.05 1875.40 7.33 0.00 0.00 68134.91 10703.26 61815.62
00:34:42.438 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:34:42.438 Verification LBA range: start 0x80000 length 0x80000
00:34:42.438 nvme0n1 : 5.03 1856.13 7.25 0.00 0.00 68843.12 10130.89 70515.59
00:34:42.438 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:42.438 Verification LBA range: start 0x0 length 0x80000
00:34:42.438 nvme0n2 : 5.07 1869.10 7.30 0.00 0.00 68254.36 10188.13 67768.23
00:34:42.438 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:34:42.438 Verification LBA range: start 0x80000 length 0x80000
00:34:42.438 nvme0n2 : 5.05 1850.77 7.23 0.00 0.00 68927.96 12878.25 60899.83
00:34:42.438 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:42.438 Verification LBA range: start 0x0 length 0x80000
00:34:42.438 nvme0n3 : 5.04 1853.22 7.24 0.00 0.00 68721.45 10932.21 68684.02
00:34:42.438 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:34:42.438 Verification LBA range: start 0x80000 length 0x80000
00:34:42.438 nvme0n3 : 5.05 1850.02 7.23 0.00 0.00 68840.47 11218.39 69599.80
00:34:42.438 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:42.438 Verification LBA range: start 0x0 length 0x20000
00:34:42.438 nvme1n1 : 5.05 1848.52 7.22 0.00 0.00 68776.58 11790.76 63647.19
00:34:42.438 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:34:42.438 Verification LBA range: start 0x20000 length 0x20000
00:34:42.438 nvme1n1 : 5.04 1852.81 7.24 0.00 0.00 68615.15 12420.36 65478.76
00:34:42.438 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:42.438 Verification LBA range: start 0x0 length 0xbd0bd
00:34:42.438 nvme2n1 : 5.07 2711.76 10.59 0.00 0.00 46742.52 5551.96 65020.87
00:34:42.438 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:34:42.438 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:34:42.438 nvme2n1 : 5.06 2660.99 10.39 0.00 0.00 47678.98 5265.77 56091.95
00:34:42.438 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:42.438 Verification LBA range: start 0x0 length 0xa0000
00:34:42.438 nvme3n1 : 5.06 1870.85 7.31 0.00 0.00 67759.64 8013.14 63647.19
00:34:42.438 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:34:42.438 Verification LBA range: start 0xa0000 length 0xa0000
00:34:42.438 nvme3n1 : 5.07 1868.28 7.30 0.00 0.00 67703.95 2976.31 70057.70
00:34:42.438 [2024-11-20T13:54:50.157Z] ===================================================================================================================
00:34:42.438 [2024-11-20T13:54:50.157Z] Total : 23967.85 93.62 0.00 0.00 63680.01 2976.31 70515.59
00:34:43.818
00:34:43.818 real 0m7.133s
00:34:43.818 user 0m11.279s
00:34:43.818 sys 0m1.841s
13:54:51 blockdev_xnvme.bdev_verify --
common/autotest_common.sh@1130 -- # xtrace_disable 00:34:43.818 13:54:51 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:34:43.818 ************************************ 00:34:43.818 END TEST bdev_verify 00:34:43.818 ************************************ 00:34:43.818 13:54:51 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:43.818 13:54:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:34:43.818 13:54:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:43.818 13:54:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:43.818 ************************************ 00:34:43.818 START TEST bdev_verify_big_io 00:34:43.818 ************************************ 00:34:43.818 13:54:51 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:34:43.818 [2024-11-20 13:54:51.280594] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:34:43.818 [2024-11-20 13:54:51.280709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75328 ] 00:34:43.818 [2024-11-20 13:54:51.458525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:44.078 [2024-11-20 13:54:51.582395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.078 [2024-11-20 13:54:51.582431] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:44.646 Running I/O for 5 seconds... 
00:34:50.472 2024.00 IOPS, 126.50 MiB/s
[2024-11-20T13:54:58.191Z] 3197.00 IOPS, 199.81 MiB/s
[2024-11-20T13:54:58.191Z] 3486.33 IOPS, 217.90 MiB/s
00:34:50.472 Latency(us)
00:34:50.472 [2024-11-20T13:54:58.191Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:50.472 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:50.472 Verification LBA range: start 0x0 length 0x8000
00:34:50.472 nvme0n1 : 5.59 145.85 9.12 0.00 0.00 843067.42 64105.08 1311406.84
00:34:50.472 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:50.472 Verification LBA range: start 0x8000 length 0x8000
00:34:50.472 nvme0n1 : 5.70 168.38 10.52 0.00 0.00 719055.69 5237.16 1267449.07
00:34:50.472 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:50.472 Verification LBA range: start 0x0 length 0x8000
00:34:50.472 nvme0n2 : 5.60 137.18 8.57 0.00 0.00 860881.42 107604.96 930439.49
00:34:50.472 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:50.472 Verification LBA range: start 0x8000 length 0x8000
00:34:50.472 nvme0n2 : 5.70 126.23 7.89 0.00 0.00 961276.07 91120.80 2197888.56
00:34:50.472 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:50.472 Verification LBA range: start 0x0 length 0x8000
00:34:50.472 nvme0n3 : 5.68 149.42 9.34 0.00 0.00 795389.00 74636.63 1172207.23
00:34:50.472 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:50.472 Verification LBA range: start 0x8000 length 0x8000
00:34:50.472 nvme0n3 : 5.77 135.82 8.49 0.00 0.00 874364.99 19803.89 2285804.10
00:34:50.472 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:50.472 Verification LBA range: start 0x0 length 0x2000
00:34:50.472 nvme1n1 : 5.68 164.74 10.30 0.00 0.00 701812.70 78757.67 717976.93
00:34:50.472 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:50.472 Verification LBA range: start 0x2000 length 0x2000
00:34:50.472 nvme1n1 : 5.71 129.32 8.08 0.00 0.00 890505.35 119968.08 2344414.46
00:34:50.472 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:50.472 Verification LBA range: start 0x0 length 0xbd0b
00:34:50.472 nvme2n1 : 5.71 179.37 11.21 0.00 0.00 629011.00 29305.18 699661.19
00:34:50.472 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:50.472 Verification LBA range: start 0xbd0b length 0xbd0b
00:34:50.472 nvme2n1 : 5.77 174.55 10.91 0.00 0.00 643578.39 19574.94 1142902.05
00:34:50.472 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:34:50.472 Verification LBA range: start 0x0 length 0xa000
00:34:50.472 nvme3n1 : 5.78 165.99 10.37 0.00 0.00 660858.03 425.70 1062312.80
00:34:50.472 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:34:50.472 Verification LBA range: start 0xa000 length 0xa000
00:34:50.472 nvme3n1 : 5.83 200.45 12.53 0.00 0.00 546091.73 958.71 728966.37
00:34:50.472 [2024-11-20T13:54:58.191Z] ===================================================================================================================
00:34:50.472 [2024-11-20T13:54:58.191Z] Total : 1877.29 117.33 0.00 0.00 742841.28 425.70 2344414.46
00:34:51.854
00:34:51.854 real 0m8.274s
00:34:51.854 user 0m15.080s
00:34:51.854 sys 0m0.529s
13:54:59 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:51.854 13:54:59
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:34:51.854 ************************************ 00:34:51.854 END TEST bdev_verify_big_io 00:34:51.854 ************************************ 00:34:51.854 13:54:59 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:51.854 13:54:59 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:34:51.854 13:54:59 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:51.854 13:54:59 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:51.854 ************************************ 00:34:51.854 START TEST bdev_write_zeroes 00:34:51.854 ************************************ 00:34:51.854 13:54:59 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:52.113 [2024-11-20 13:54:59.614744] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:34:52.113 [2024-11-20 13:54:59.614861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75438 ] 00:34:52.113 [2024-11-20 13:54:59.789798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:52.373 [2024-11-20 13:54:59.914092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:52.941 Running I/O for 1 seconds... 
00:34:53.878 63840.00 IOPS, 249.38 MiB/s
00:34:53.878 Latency(us)
00:34:53.878 [2024-11-20T13:55:01.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:53.878 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:53.878 nvme0n1 : 1.02 10420.53 40.71 0.00 0.00 12271.47 7383.53 27931.50
00:34:53.878 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:53.878 nvme0n2 : 1.02 10407.32 40.65 0.00 0.00 12278.67 7669.72 28045.97
00:34:53.878 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:53.878 nvme0n3 : 1.02 10394.67 40.60 0.00 0.00 12285.33 7669.72 29305.18
00:34:53.878 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:53.878 nvme1n1 : 1.02 10385.38 40.57 0.00 0.00 12289.58 7669.72 29763.07
00:34:53.878 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:53.878 nvme2n1 : 1.03 11287.01 44.09 0.00 0.00 11298.74 4693.41 24153.88
00:34:53.878 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:34:53.878 nvme3n1 : 1.03 10438.01 40.77 0.00 0.00 12145.35 3162.33 29076.23
00:34:53.878 [2024-11-20T13:55:01.597Z] ===================================================================================================================
00:34:53.878 [2024-11-20T13:55:01.597Z] Total : 63332.92 247.39 0.00 0.00 12082.78 3162.33 29763.07
00:34:55.259
00:34:55.259 real 0m3.050s
00:34:55.259 user 0m2.358s
00:34:55.259 sys 0m0.530s
00:34:55.259 13:55:02 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:55.259 13:55:02 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:34:55.259 ************************************
00:34:55.259 END TEST bdev_write_zeroes
00:34:55.259 ************************************
00:34:55.259 13:55:02 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:34:55.259 13:55:02 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:34:55.259 13:55:02 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:55.259 13:55:02 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:34:55.259 ************************************
00:34:55.259 START TEST bdev_json_nonenclosed
00:34:55.259 ************************************
00:34:55.259 13:55:02 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:34:55.259 [2024-11-20 13:55:02.740412] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization...
00:34:55.259 [2024-11-20 13:55:02.740524] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75497 ] 00:34:55.259 [2024-11-20 13:55:02.917610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.542 [2024-11-20 13:55:03.035264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.542 [2024-11-20 13:55:03.035364] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:34:55.542 [2024-11-20 13:55:03.035382] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:55.542 [2024-11-20 13:55:03.035391] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:55.802 00:34:55.802 real 0m0.645s 00:34:55.802 user 0m0.408s 00:34:55.802 sys 0m0.132s 00:34:55.802 13:55:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:55.802 13:55:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:34:55.802 ************************************ 00:34:55.802 END TEST bdev_json_nonenclosed 00:34:55.802 ************************************ 00:34:55.802 13:55:03 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:55.802 13:55:03 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:34:55.802 13:55:03 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:55.802 13:55:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:34:55.802 ************************************ 00:34:55.802 START TEST bdev_json_nonarray 00:34:55.802 ************************************ 00:34:55.802 13:55:03 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:34:55.802 [2024-11-20 13:55:03.446690] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:34:55.802 [2024-11-20 13:55:03.446826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75523 ] 00:34:56.061 [2024-11-20 13:55:03.623873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:56.061 [2024-11-20 13:55:03.737759] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:56.061 [2024-11-20 13:55:03.737872] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:34:56.061 [2024-11-20 13:55:03.737892] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:34:56.061 [2024-11-20 13:55:03.737902] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:34:56.321 00:34:56.321 real 0m0.632s 00:34:56.321 user 0m0.395s 00:34:56.321 sys 0m0.132s 00:34:56.321 13:55:03 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:56.321 13:55:03 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:34:56.321 ************************************ 00:34:56.321 END TEST bdev_json_nonarray 00:34:56.321 ************************************ 00:34:56.581 13:55:04 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:34:56.581 13:55:04 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:34:56.581 13:55:04 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:34:56.581 13:55:04 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:34:56.581 13:55:04 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:34:56.581 13:55:04 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:34:56.581 13:55:04 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:34:56.581 13:55:04 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:34:56.581 13:55:04 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:34:56.581 13:55:04 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:34:56.581 13:55:04 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:34:56.581 13:55:04 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:34:57.149 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:15.237 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:35:23.353 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:35:23.353 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:35:23.353 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:35:23.611 00:35:23.611 real 1m22.400s 00:35:23.611 user 1m33.418s 00:35:23.611 sys 1m31.322s 00:35:23.612 13:55:31 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:23.612 13:55:31 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:35:23.612 ************************************ 00:35:23.612 END TEST blockdev_xnvme 00:35:23.612 ************************************ 00:35:23.612 13:55:31 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:35:23.612 13:55:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:23.612 13:55:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:23.612 13:55:31 -- common/autotest_common.sh@10 -- # set +x 00:35:23.612 ************************************ 00:35:23.612 START TEST ublk 00:35:23.612 ************************************ 00:35:23.612 13:55:31 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:35:23.612 * Looking for test storage... 
00:35:23.612 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:35:23.612 13:55:31 ublk -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:23.612 13:55:31 ublk -- common/autotest_common.sh@1693 -- # lcov --version 00:35:23.612 13:55:31 ublk -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:23.869 13:55:31 ublk -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:23.869 13:55:31 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:23.869 13:55:31 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:23.869 13:55:31 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:23.869 13:55:31 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:35:23.869 13:55:31 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:35:23.869 13:55:31 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:35:23.869 13:55:31 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:35:23.869 13:55:31 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:35:23.869 13:55:31 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:35:23.869 13:55:31 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:35:23.869 13:55:31 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:23.869 13:55:31 ublk -- scripts/common.sh@344 -- # case "$op" in 00:35:23.869 13:55:31 ublk -- scripts/common.sh@345 -- # : 1 00:35:23.869 13:55:31 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:23.869 13:55:31 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:23.869 13:55:31 ublk -- scripts/common.sh@365 -- # decimal 1 00:35:23.869 13:55:31 ublk -- scripts/common.sh@353 -- # local d=1 00:35:23.869 13:55:31 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:23.869 13:55:31 ublk -- scripts/common.sh@355 -- # echo 1 00:35:23.869 13:55:31 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:35:23.869 13:55:31 ublk -- scripts/common.sh@366 -- # decimal 2 00:35:23.869 13:55:31 ublk -- scripts/common.sh@353 -- # local d=2 00:35:23.869 13:55:31 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:23.869 13:55:31 ublk -- scripts/common.sh@355 -- # echo 2 00:35:23.869 13:55:31 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:35:23.869 13:55:31 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:23.869 13:55:31 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:23.869 13:55:31 ublk -- scripts/common.sh@368 -- # return 0 00:35:23.869 13:55:31 ublk -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:23.869 13:55:31 ublk -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:23.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.869 --rc genhtml_branch_coverage=1 00:35:23.869 --rc genhtml_function_coverage=1 00:35:23.869 --rc genhtml_legend=1 00:35:23.869 --rc geninfo_all_blocks=1 00:35:23.869 --rc geninfo_unexecuted_blocks=1 00:35:23.869 00:35:23.869 ' 00:35:23.869 13:55:31 ublk -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:23.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.869 --rc genhtml_branch_coverage=1 00:35:23.869 --rc genhtml_function_coverage=1 00:35:23.869 --rc genhtml_legend=1 00:35:23.869 --rc geninfo_all_blocks=1 00:35:23.869 --rc geninfo_unexecuted_blocks=1 00:35:23.869 00:35:23.869 ' 00:35:23.869 13:55:31 ublk -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:23.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.869 --rc genhtml_branch_coverage=1 00:35:23.869 --rc 
genhtml_function_coverage=1 00:35:23.869 --rc genhtml_legend=1 00:35:23.869 --rc geninfo_all_blocks=1 00:35:23.869 --rc geninfo_unexecuted_blocks=1 00:35:23.869 00:35:23.869 ' 00:35:23.869 13:55:31 ublk -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:23.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:23.869 --rc genhtml_branch_coverage=1 00:35:23.869 --rc genhtml_function_coverage=1 00:35:23.869 --rc genhtml_legend=1 00:35:23.869 --rc geninfo_all_blocks=1 00:35:23.869 --rc geninfo_unexecuted_blocks=1 00:35:23.869 00:35:23.869 ' 00:35:23.869 13:55:31 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:35:23.869 13:55:31 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:35:23.869 13:55:31 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:35:23.869 13:55:31 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:35:23.869 13:55:31 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:35:23.869 13:55:31 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:35:23.869 13:55:31 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:35:23.869 13:55:31 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:35:23.869 13:55:31 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:35:23.869 13:55:31 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:35:23.869 13:55:31 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:35:23.869 13:55:31 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:35:23.869 13:55:31 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:35:23.869 13:55:31 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:35:23.869 13:55:31 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:35:23.869 13:55:31 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:35:23.869 13:55:31 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:35:23.869 13:55:31 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:35:23.869 13:55:31 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:35:23.869 13:55:31 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:35:23.869 13:55:31 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:23.869 13:55:31 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:23.869 13:55:31 ublk -- common/autotest_common.sh@10 -- # set +x 00:35:23.869 ************************************ 00:35:23.869 START TEST test_save_ublk_config 00:35:23.869 ************************************ 00:35:23.869 13:55:31 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config 00:35:23.869 13:55:31 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:35:23.869 13:55:31 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=76010 00:35:23.869 13:55:31 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:35:23.869 13:55:31 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:35:23.869 13:55:31 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 76010 00:35:23.869 13:55:31 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 76010 ']' 00:35:23.869 13:55:31 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:23.869 13:55:31 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:23.869 13:55:31 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
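The save-config test starting here follows a simple pattern: boot spdk_tgt with ublk tracing, wait for its RPC socket (the 'Waiting for process...' banner printed below is that wait), create the ublk kernel target plus a 128 MiB malloc bdev, expose it as a ublk disk, and snapshot the runtime configuration as JSON. A rough sketch of the equivalent manual steps; the RPC verbs are the ones this SPDK tree's rpc.py is expected to provide, but the exact bdev_malloc_create arguments are an assumption inferred from MALLOC_SIZE_MB=128 and MALLOC_BS=4096 above:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk &   # pid 76010 in this run
    # poll /var/tmp/spdk.sock until rpc.py answers, then:
    rpc.py ublk_create_target                                   # 'UBLK target created successfully'
    rpc.py bdev_malloc_create -b malloc0 128 4096               # assumed args: 128 MiB bdev, 4096 B blocks
    rpc.py ublk_start_disk malloc0 0                            # ublk0 (num_queues 1, queue_depth 128 per the notice below)
    rpc.py save_config                                          # emits the JSON config shown below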
00:35:23.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:23.869 13:55:31 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:23.869 13:55:31 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:35:23.869 [2024-11-20 13:55:31.513933] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:35:23.869 [2024-11-20 13:55:31.514050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76010 ] 00:35:24.127 [2024-11-20 13:55:31.691488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:24.127 [2024-11-20 13:55:31.811964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.061 13:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:25.061 13:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:35:25.061 13:55:32 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:35:25.061 13:55:32 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:35:25.061 13:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.061 13:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:35:25.061 [2024-11-20 13:55:32.756776] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:35:25.061 [2024-11-20 13:55:32.758070] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:35:25.320 malloc0 00:35:25.320 [2024-11-20 13:55:32.850897] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:35:25.320 [2024-11-20 13:55:32.851011] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:35:25.320 [2024-11-20 13:55:32.851024] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:35:25.320 [2024-11-20 13:55:32.851032] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:35:25.320 [2024-11-20 13:55:32.859102] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:35:25.320 [2024-11-20 13:55:32.859130] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:35:25.320 [2024-11-20 13:55:32.864740] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:35:25.320 [2024-11-20 13:55:32.864861] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:35:25.320 [2024-11-20 13:55:32.882762] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:35:25.320 0 00:35:25.320 13:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.320 13:55:32 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:35:25.320 13:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.321 13:55:32 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:35:25.585 13:55:33 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.585 13:55:33 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:35:25.585 "subsystems": [ 00:35:25.585 { 00:35:25.585 "subsystem": "fsdev", 00:35:25.585 
"config": [ 00:35:25.585 { 00:35:25.585 "method": "fsdev_set_opts", 00:35:25.585 "params": { 00:35:25.585 "fsdev_io_pool_size": 65535, 00:35:25.585 "fsdev_io_cache_size": 256 00:35:25.585 } 00:35:25.585 } 00:35:25.585 ] 00:35:25.585 }, 00:35:25.585 { 00:35:25.585 "subsystem": "keyring", 00:35:25.585 "config": [] 00:35:25.585 }, 00:35:25.585 { 00:35:25.585 "subsystem": "iobuf", 00:35:25.585 "config": [ 00:35:25.585 { 00:35:25.585 "method": "iobuf_set_options", 00:35:25.585 "params": { 00:35:25.585 "small_pool_count": 8192, 00:35:25.585 "large_pool_count": 1024, 00:35:25.585 "small_bufsize": 8192, 00:35:25.585 "large_bufsize": 135168, 00:35:25.585 "enable_numa": false 00:35:25.585 } 00:35:25.585 } 00:35:25.585 ] 00:35:25.585 }, 00:35:25.585 { 00:35:25.585 "subsystem": "sock", 00:35:25.585 "config": [ 00:35:25.585 { 00:35:25.585 "method": "sock_set_default_impl", 00:35:25.585 "params": { 00:35:25.585 "impl_name": "posix" 00:35:25.585 } 00:35:25.585 }, 00:35:25.585 { 00:35:25.585 "method": "sock_impl_set_options", 00:35:25.585 "params": { 00:35:25.585 "impl_name": "ssl", 00:35:25.585 "recv_buf_size": 4096, 00:35:25.585 "send_buf_size": 4096, 00:35:25.585 "enable_recv_pipe": true, 00:35:25.585 "enable_quickack": false, 00:35:25.585 "enable_placement_id": 0, 00:35:25.585 "enable_zerocopy_send_server": true, 00:35:25.585 "enable_zerocopy_send_client": false, 00:35:25.585 "zerocopy_threshold": 0, 00:35:25.585 "tls_version": 0, 00:35:25.585 "enable_ktls": false 00:35:25.585 } 00:35:25.585 }, 00:35:25.585 { 00:35:25.585 "method": "sock_impl_set_options", 00:35:25.585 "params": { 00:35:25.585 "impl_name": "posix", 00:35:25.585 "recv_buf_size": 2097152, 00:35:25.585 "send_buf_size": 2097152, 00:35:25.585 "enable_recv_pipe": true, 00:35:25.585 "enable_quickack": false, 00:35:25.585 "enable_placement_id": 0, 00:35:25.585 "enable_zerocopy_send_server": true, 00:35:25.585 "enable_zerocopy_send_client": false, 00:35:25.585 "zerocopy_threshold": 0, 00:35:25.585 "tls_version": 0, 00:35:25.585 "enable_ktls": false 00:35:25.585 } 00:35:25.585 } 00:35:25.585 ] 00:35:25.585 }, 00:35:25.585 { 00:35:25.585 "subsystem": "vmd", 00:35:25.585 "config": [] 00:35:25.585 }, 00:35:25.585 { 00:35:25.585 "subsystem": "accel", 00:35:25.585 "config": [ 00:35:25.585 { 00:35:25.585 "method": "accel_set_options", 00:35:25.585 "params": { 00:35:25.585 "small_cache_size": 128, 00:35:25.585 "large_cache_size": 16, 00:35:25.585 "task_count": 2048, 00:35:25.585 "sequence_count": 2048, 00:35:25.585 "buf_count": 2048 00:35:25.585 } 00:35:25.585 } 00:35:25.585 ] 00:35:25.585 }, 00:35:25.585 { 00:35:25.585 "subsystem": "bdev", 00:35:25.585 "config": [ 00:35:25.585 { 00:35:25.585 "method": "bdev_set_options", 00:35:25.585 "params": { 00:35:25.585 "bdev_io_pool_size": 65535, 00:35:25.585 "bdev_io_cache_size": 256, 00:35:25.585 "bdev_auto_examine": true, 00:35:25.585 "iobuf_small_cache_size": 128, 00:35:25.585 "iobuf_large_cache_size": 16 00:35:25.585 } 00:35:25.585 }, 00:35:25.585 { 00:35:25.585 "method": "bdev_raid_set_options", 00:35:25.585 "params": { 00:35:25.585 "process_window_size_kb": 1024, 00:35:25.585 "process_max_bandwidth_mb_sec": 0 00:35:25.585 } 00:35:25.585 }, 00:35:25.585 { 00:35:25.585 "method": "bdev_iscsi_set_options", 00:35:25.585 "params": { 00:35:25.585 "timeout_sec": 30 00:35:25.585 } 00:35:25.585 }, 00:35:25.585 { 00:35:25.585 "method": "bdev_nvme_set_options", 00:35:25.585 "params": { 00:35:25.585 "action_on_timeout": "none", 00:35:25.585 "timeout_us": 0, 00:35:25.585 "timeout_admin_us": 0, 00:35:25.585 
"keep_alive_timeout_ms": 10000, 00:35:25.585 "arbitration_burst": 0, 00:35:25.585 "low_priority_weight": 0, 00:35:25.585 "medium_priority_weight": 0, 00:35:25.585 "high_priority_weight": 0, 00:35:25.585 "nvme_adminq_poll_period_us": 10000, 00:35:25.585 "nvme_ioq_poll_period_us": 0, 00:35:25.585 "io_queue_requests": 0, 00:35:25.585 "delay_cmd_submit": true, 00:35:25.585 "transport_retry_count": 4, 00:35:25.585 "bdev_retry_count": 3, 00:35:25.585 "transport_ack_timeout": 0, 00:35:25.585 "ctrlr_loss_timeout_sec": 0, 00:35:25.585 "reconnect_delay_sec": 0, 00:35:25.585 "fast_io_fail_timeout_sec": 0, 00:35:25.585 "disable_auto_failback": false, 00:35:25.585 "generate_uuids": false, 00:35:25.585 "transport_tos": 0, 00:35:25.585 "nvme_error_stat": false, 00:35:25.585 "rdma_srq_size": 0, 00:35:25.585 "io_path_stat": false, 00:35:25.585 "allow_accel_sequence": false, 00:35:25.585 "rdma_max_cq_size": 0, 00:35:25.585 "rdma_cm_event_timeout_ms": 0, 00:35:25.585 "dhchap_digests": [ 00:35:25.585 "sha256", 00:35:25.585 "sha384", 00:35:25.585 "sha512" 00:35:25.585 ], 00:35:25.585 "dhchap_dhgroups": [ 00:35:25.585 "null", 00:35:25.585 "ffdhe2048", 00:35:25.585 "ffdhe3072", 00:35:25.585 "ffdhe4096", 00:35:25.585 "ffdhe6144", 00:35:25.585 "ffdhe8192" 00:35:25.585 ] 00:35:25.585 } 00:35:25.585 }, 00:35:25.585 { 00:35:25.585 "method": "bdev_nvme_set_hotplug", 00:35:25.585 "params": { 00:35:25.585 "period_us": 100000, 00:35:25.585 "enable": false 00:35:25.585 } 00:35:25.585 }, 00:35:25.585 { 00:35:25.585 "method": "bdev_malloc_create", 00:35:25.585 "params": { 00:35:25.585 "name": "malloc0", 00:35:25.585 "num_blocks": 8192, 00:35:25.585 "block_size": 4096, 00:35:25.585 "physical_block_size": 4096, 00:35:25.585 "uuid": "286d9c27-330f-483d-897f-915bfe6da6e4", 00:35:25.585 "optimal_io_boundary": 0, 00:35:25.585 "md_size": 0, 00:35:25.585 "dif_type": 0, 00:35:25.585 "dif_is_head_of_md": false, 00:35:25.585 "dif_pi_format": 0 00:35:25.585 } 00:35:25.585 }, 00:35:25.585 { 00:35:25.585 "method": "bdev_wait_for_examine" 00:35:25.585 } 00:35:25.585 ] 00:35:25.585 }, 00:35:25.586 { 00:35:25.586 "subsystem": "scsi", 00:35:25.586 "config": null 00:35:25.586 }, 00:35:25.586 { 00:35:25.586 "subsystem": "scheduler", 00:35:25.586 "config": [ 00:35:25.586 { 00:35:25.586 "method": "framework_set_scheduler", 00:35:25.586 "params": { 00:35:25.586 "name": "static" 00:35:25.586 } 00:35:25.586 } 00:35:25.586 ] 00:35:25.586 }, 00:35:25.586 { 00:35:25.586 "subsystem": "vhost_scsi", 00:35:25.586 "config": [] 00:35:25.586 }, 00:35:25.586 { 00:35:25.586 "subsystem": "vhost_blk", 00:35:25.586 "config": [] 00:35:25.586 }, 00:35:25.586 { 00:35:25.586 "subsystem": "ublk", 00:35:25.586 "config": [ 00:35:25.586 { 00:35:25.586 "method": "ublk_create_target", 00:35:25.586 "params": { 00:35:25.586 "cpumask": "1" 00:35:25.586 } 00:35:25.586 }, 00:35:25.586 { 00:35:25.586 "method": "ublk_start_disk", 00:35:25.586 "params": { 00:35:25.586 "bdev_name": "malloc0", 00:35:25.586 "ublk_id": 0, 00:35:25.586 "num_queues": 1, 00:35:25.586 "queue_depth": 128 00:35:25.586 } 00:35:25.586 } 00:35:25.586 ] 00:35:25.586 }, 00:35:25.586 { 00:35:25.586 "subsystem": "nbd", 00:35:25.586 "config": [] 00:35:25.586 }, 00:35:25.586 { 00:35:25.586 "subsystem": "nvmf", 00:35:25.586 "config": [ 00:35:25.586 { 00:35:25.586 "method": "nvmf_set_config", 00:35:25.586 "params": { 00:35:25.586 "discovery_filter": "match_any", 00:35:25.586 "admin_cmd_passthru": { 00:35:25.586 "identify_ctrlr": false 00:35:25.586 }, 00:35:25.586 "dhchap_digests": [ 00:35:25.586 "sha256", 00:35:25.586 
"sha384", 00:35:25.586 "sha512" 00:35:25.586 ], 00:35:25.586 "dhchap_dhgroups": [ 00:35:25.586 "null", 00:35:25.586 "ffdhe2048", 00:35:25.586 "ffdhe3072", 00:35:25.586 "ffdhe4096", 00:35:25.586 "ffdhe6144", 00:35:25.586 "ffdhe8192" 00:35:25.586 ] 00:35:25.586 } 00:35:25.586 }, 00:35:25.586 { 00:35:25.586 "method": "nvmf_set_max_subsystems", 00:35:25.586 "params": { 00:35:25.586 "max_subsystems": 1024 00:35:25.586 } 00:35:25.586 }, 00:35:25.586 { 00:35:25.586 "method": "nvmf_set_crdt", 00:35:25.586 "params": { 00:35:25.586 "crdt1": 0, 00:35:25.586 "crdt2": 0, 00:35:25.586 "crdt3": 0 00:35:25.586 } 00:35:25.586 } 00:35:25.586 ] 00:35:25.586 }, 00:35:25.586 { 00:35:25.586 "subsystem": "iscsi", 00:35:25.586 "config": [ 00:35:25.586 { 00:35:25.586 "method": "iscsi_set_options", 00:35:25.586 "params": { 00:35:25.586 "node_base": "iqn.2016-06.io.spdk", 00:35:25.586 "max_sessions": 128, 00:35:25.586 "max_connections_per_session": 2, 00:35:25.586 "max_queue_depth": 64, 00:35:25.586 "default_time2wait": 2, 00:35:25.586 "default_time2retain": 20, 00:35:25.586 "first_burst_length": 8192, 00:35:25.586 "immediate_data": true, 00:35:25.586 "allow_duplicated_isid": false, 00:35:25.586 "error_recovery_level": 0, 00:35:25.586 "nop_timeout": 60, 00:35:25.586 "nop_in_interval": 30, 00:35:25.586 "disable_chap": false, 00:35:25.586 "require_chap": false, 00:35:25.586 "mutual_chap": false, 00:35:25.586 "chap_group": 0, 00:35:25.586 "max_large_datain_per_connection": 64, 00:35:25.586 "max_r2t_per_connection": 4, 00:35:25.586 "pdu_pool_size": 36864, 00:35:25.586 "immediate_data_pool_size": 16384, 00:35:25.586 "data_out_pool_size": 2048 00:35:25.586 } 00:35:25.586 } 00:35:25.586 ] 00:35:25.586 } 00:35:25.586 ] 00:35:25.586 }' 00:35:25.586 13:55:33 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 76010 00:35:25.586 13:55:33 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 76010 ']' 00:35:25.586 13:55:33 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 76010 00:35:25.586 13:55:33 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:35:25.586 13:55:33 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:25.586 13:55:33 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76010 00:35:25.586 13:55:33 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:25.586 13:55:33 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:25.586 killing process with pid 76010 00:35:25.586 13:55:33 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76010' 00:35:25.586 13:55:33 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 76010 00:35:25.586 13:55:33 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 76010 00:35:28.123 [2024-11-20 13:55:35.236484] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:35:28.123 [2024-11-20 13:55:35.267826] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:35:28.123 [2024-11-20 13:55:35.268021] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:35:28.123 [2024-11-20 13:55:35.277754] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:35:28.123 [2024-11-20 13:55:35.277846] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:35:28.123 [2024-11-20 13:55:35.277862] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:35:28.123 [2024-11-20 13:55:35.277888] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:35:28.123 [2024-11-20 13:55:35.278091] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:35:30.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:30.026 13:55:37 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=76081 00:35:30.026 13:55:37 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 76081 00:35:30.026 13:55:37 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:35:30.026 13:55:37 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 76081 ']' 00:35:30.026 13:55:37 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:30.026 13:55:37 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:30.026 13:55:37 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:35:30.026 "subsystems": [ 00:35:30.026 { 00:35:30.026 "subsystem": "fsdev", 00:35:30.026 "config": [ 00:35:30.026 { 00:35:30.026 "method": "fsdev_set_opts", 00:35:30.026 "params": { 00:35:30.026 "fsdev_io_pool_size": 65535, 00:35:30.026 "fsdev_io_cache_size": 256 00:35:30.026 } 00:35:30.026 } 00:35:30.026 ] 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "subsystem": "keyring", 00:35:30.026 "config": [] 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "subsystem": "iobuf", 00:35:30.026 "config": [ 00:35:30.026 { 00:35:30.026 "method": "iobuf_set_options", 00:35:30.026 "params": { 00:35:30.026 "small_pool_count": 8192, 00:35:30.026 "large_pool_count": 1024, 00:35:30.026 "small_bufsize": 8192, 00:35:30.026 "large_bufsize": 135168, 00:35:30.026 "enable_numa": false 00:35:30.026 } 00:35:30.026 } 00:35:30.026 ] 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "subsystem": "sock", 00:35:30.026 "config": [ 00:35:30.026 { 00:35:30.026 "method": "sock_set_default_impl", 00:35:30.026 "params": { 00:35:30.026 "impl_name": "posix" 00:35:30.026 } 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "method": "sock_impl_set_options", 00:35:30.026 "params": { 00:35:30.026 "impl_name": "ssl", 00:35:30.026 "recv_buf_size": 4096, 00:35:30.026 "send_buf_size": 4096, 00:35:30.026 "enable_recv_pipe": true, 00:35:30.026 "enable_quickack": false, 00:35:30.026 "enable_placement_id": 0, 00:35:30.026 "enable_zerocopy_send_server": true, 00:35:30.026 "enable_zerocopy_send_client": false, 00:35:30.026 "zerocopy_threshold": 0, 00:35:30.026 "tls_version": 0, 00:35:30.026 "enable_ktls": false 00:35:30.026 } 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "method": "sock_impl_set_options", 00:35:30.026 "params": { 00:35:30.026 "impl_name": "posix", 00:35:30.026 "recv_buf_size": 2097152, 00:35:30.026 "send_buf_size": 2097152, 00:35:30.026 "enable_recv_pipe": true, 00:35:30.026 "enable_quickack": false, 00:35:30.026 "enable_placement_id": 0, 00:35:30.026 "enable_zerocopy_send_server": true, 00:35:30.026 "enable_zerocopy_send_client": false, 00:35:30.026 "zerocopy_threshold": 0, 00:35:30.026 "tls_version": 0, 00:35:30.026 "enable_ktls": false 00:35:30.026 } 00:35:30.026 } 00:35:30.026 ] 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "subsystem": "vmd", 00:35:30.026 "config": [] 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "subsystem": "accel", 00:35:30.026 "config": [ 00:35:30.026 { 00:35:30.026 "method": "accel_set_options", 00:35:30.026 "params": { 00:35:30.026 
"small_cache_size": 128, 00:35:30.026 "large_cache_size": 16, 00:35:30.026 "task_count": 2048, 00:35:30.026 "sequence_count": 2048, 00:35:30.026 "buf_count": 2048 00:35:30.026 } 00:35:30.026 } 00:35:30.026 ] 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "subsystem": "bdev", 00:35:30.026 "config": [ 00:35:30.026 { 00:35:30.026 "method": "bdev_set_options", 00:35:30.026 "params": { 00:35:30.026 "bdev_io_pool_size": 65535, 00:35:30.026 "bdev_io_cache_size": 256, 00:35:30.026 "bdev_auto_examine": true, 00:35:30.026 "iobuf_small_cache_size": 128, 00:35:30.026 "iobuf_large_cache_size": 16 00:35:30.026 } 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "method": "bdev_raid_set_options", 00:35:30.026 "params": { 00:35:30.026 "process_window_size_kb": 1024, 00:35:30.026 "process_max_bandwidth_mb_sec": 0 00:35:30.026 } 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "method": "bdev_iscsi_set_options", 00:35:30.026 "params": { 00:35:30.026 "timeout_sec": 30 00:35:30.026 } 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "method": "bdev_nvme_set_options", 00:35:30.026 "params": { 00:35:30.026 "action_on_timeout": "none", 00:35:30.026 "timeout_us": 0, 00:35:30.026 "timeout_admin_us": 0, 00:35:30.026 "keep_alive_timeout_ms": 10000, 00:35:30.026 "arbitration_burst": 0, 00:35:30.026 "low_priority_weight": 0, 00:35:30.026 "medium_priority_weight": 0, 00:35:30.026 "high_priority_weight": 0, 00:35:30.026 "nvme_adminq_poll_period_us": 10000, 00:35:30.026 "nvme_ioq_poll_period_us": 0, 00:35:30.026 "io_queue_requests": 0, 00:35:30.026 "delay_cmd_submit": true, 00:35:30.026 "transport_retry_count": 4, 00:35:30.026 "bdev_retry_count": 3, 00:35:30.026 "transport_ack_timeout": 0, 00:35:30.026 "ctrlr_loss_timeout_sec": 0, 00:35:30.026 "reconnect_delay_sec": 0, 00:35:30.026 "fast_io_fail_timeout_sec": 0, 00:35:30.026 "disable_auto_failback": false, 00:35:30.026 "generate_uuids": false, 00:35:30.026 "transport_tos": 0, 00:35:30.026 "nvme_error_stat": false, 00:35:30.026 "rdma_srq_size": 0, 00:35:30.026 "io_path_stat": false, 00:35:30.026 "allow_accel_sequence": false, 00:35:30.026 "rdma_max_cq_size": 0, 00:35:30.026 "rdma_cm_event_timeout_ms": 0, 00:35:30.026 "dhchap_digests": [ 00:35:30.026 "sha256", 00:35:30.026 "sha384", 00:35:30.026 "sha512" 00:35:30.026 ], 00:35:30.026 "dhchap_dhgroups": [ 00:35:30.026 "null", 00:35:30.026 "ffdhe2048", 00:35:30.026 "ffdhe3072", 00:35:30.026 "ffdhe4096", 00:35:30.026 "ffdhe6144", 00:35:30.026 "ffdhe8192" 00:35:30.026 ] 00:35:30.026 } 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "method": "bdev_nvme_set_hotplug", 00:35:30.026 "params": { 00:35:30.026 "period_us": 100000, 00:35:30.026 "enable": false 00:35:30.026 } 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "method": "bdev_malloc_create", 00:35:30.026 "params": { 00:35:30.026 "name": "malloc0", 00:35:30.026 "num_blocks": 8192, 00:35:30.026 "block_size": 4096, 00:35:30.026 "physical_block_size": 4096, 00:35:30.026 "uuid": "286d9c27-330f-483d-897f-915bfe6da6e4", 00:35:30.026 "optimal_io_boundary": 0, 00:35:30.026 "md_size": 0, 00:35:30.026 "dif_type": 0, 00:35:30.026 "dif_is_head_of_md": false, 00:35:30.026 "dif_pi_format": 0 00:35:30.026 } 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "method": "bdev_wait_for_examine" 00:35:30.026 } 00:35:30.026 ] 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "subsystem": "scsi", 00:35:30.026 "config": null 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "subsystem": "scheduler", 00:35:30.026 "config": [ 00:35:30.026 { 00:35:30.026 "method": "framework_set_scheduler", 00:35:30.026 "params": { 00:35:30.026 "name": "static" 
00:35:30.026 } 00:35:30.026 } 00:35:30.026 ] 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "subsystem": "vhost_scsi", 00:35:30.026 "config": [] 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "subsystem": "vhost_blk", 00:35:30.026 "config": [] 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "subsystem": "ublk", 00:35:30.026 "config": [ 00:35:30.026 { 00:35:30.026 "method": "ublk_create_target", 00:35:30.026 "params": { 00:35:30.026 "cpumask": "1" 00:35:30.026 } 00:35:30.026 }, 00:35:30.026 { 00:35:30.026 "method": "ublk_start_disk", 00:35:30.026 "params": { 00:35:30.026 "bdev_name": "malloc0", 00:35:30.026 "ublk_id": 0, 00:35:30.026 "num_queues": 1, 00:35:30.027 "queue_depth": 128 00:35:30.027 } 00:35:30.027 } 00:35:30.027 ] 00:35:30.027 }, 00:35:30.027 { 00:35:30.027 "subsystem": "nbd", 00:35:30.027 "config": [] 00:35:30.027 }, 00:35:30.027 { 00:35:30.027 "subsystem": "nvmf", 00:35:30.027 "config": [ 00:35:30.027 { 00:35:30.027 "method": "nvmf_set_config", 00:35:30.027 "params": { 00:35:30.027 "discovery_filter": "match_any", 00:35:30.027 "admin_cmd_passthru": { 00:35:30.027 "identify_ctrlr": false 00:35:30.027 }, 00:35:30.027 "dhchap_digests": [ 00:35:30.027 "sha256", 00:35:30.027 "sha384", 00:35:30.027 "sha512" 00:35:30.027 ], 00:35:30.027 "dhchap_dhgroups": [ 00:35:30.027 "null", 00:35:30.027 "ffdhe2048", 00:35:30.027 "ffdhe3072", 00:35:30.027 "ffdhe4096", 00:35:30.027 "ffdhe6144", 00:35:30.027 "ffdhe8192" 00:35:30.027 ] 00:35:30.027 } 00:35:30.027 }, 00:35:30.027 { 00:35:30.027 "method": "nvmf_set_max_subsystems", 00:35:30.027 "params": { 00:35:30.027 "max_subsystems": 1024 00:35:30.027 } 00:35:30.027 }, 00:35:30.027 { 00:35:30.027 "method": "nvmf_set_crdt", 00:35:30.027 "params": { 00:35:30.027 "crdt1": 0, 00:35:30.027 "crdt2": 0, 00:35:30.027 "crdt3": 0 00:35:30.027 } 00:35:30.027 } 00:35:30.027 ] 00:35:30.027 }, 00:35:30.027 { 00:35:30.027 "subsystem": "iscsi", 00:35:30.027 "config": [ 00:35:30.027 { 00:35:30.027 "method": "iscsi_set_options", 00:35:30.027 "params": { 00:35:30.027 "node_base": "iqn.2016-06.io.spdk", 00:35:30.027 "max_sessions": 128, 00:35:30.027 "max_connections_per_session": 2, 00:35:30.027 "max_queue_depth": 64, 00:35:30.027 "default_time2wait": 2, 00:35:30.027 "default_time2retain": 20, 00:35:30.027 "first_burst_length": 8192, 00:35:30.027 "immediate_data": true, 00:35:30.027 "allow_duplicated_isid": false, 00:35:30.027 "error_recovery_level": 0, 00:35:30.027 "nop_timeout": 60, 00:35:30.027 "nop_in_interval": 30, 00:35:30.027 "disable_chap": false, 00:35:30.027 "require_chap": false, 00:35:30.027 "mutual_chap": false, 00:35:30.027 "chap_group": 0, 00:35:30.027 "max_large_datain_per_connection": 64, 00:35:30.027 "max_r2t_per_connection": 4, 00:35:30.027 "pdu_pool_size": 36864, 00:35:30.027 "immediate_data_pool_size": 16384, 00:35:30.027 "data_out_pool_size": 2048 00:35:30.027 } 00:35:30.027 } 00:35:30.027 ] 00:35:30.027 } 00:35:30.027 ] 00:35:30.027 }' 00:35:30.027 13:55:37 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:30.027 13:55:37 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:30.027 13:55:37 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:35:30.027 [2024-11-20 13:55:37.457951] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
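The JSON block just echoed is the config that save_config produced on the first target (pid 76010), now fed to a fresh spdk_tgt through -c /dev/fd/63; all the test then asserts is that ublk_get_disks reports the disk again and that /dev/ublkb0 reappears as a block node. A hedged sketch of the same round trip using an ordinary file ($SPDK_DIR, the file name, and the PID handling are illustrative):

    "$SPDK_DIR"/scripts/rpc.py save_config > ublk_config.json     # dump live state as JSON
    kill "$tgtpid" && wait "$tgtpid"                               # stop the first target
    "$SPDK_DIR"/build/bin/spdk_tgt -L ublk -c ublk_config.json &   # replay the config
    tgtpid=$!
    # Once the target is listening again, the ublk disk must be back:
    dev=$("$SPDK_DIR"/scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device')
    [ "$dev" = /dev/ublkb0 ] && [ -b "$dev" ] && echo "config replay OK"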
00:35:30.027 [2024-11-20 13:55:37.458082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76081 ] 00:35:30.027 [2024-11-20 13:55:37.640833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.285 [2024-11-20 13:55:37.773977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:31.271 [2024-11-20 13:55:38.960738] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:35:31.271 [2024-11-20 13:55:38.961861] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:35:31.529 [2024-11-20 13:55:38.968895] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:35:31.529 [2024-11-20 13:55:38.968989] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:35:31.529 [2024-11-20 13:55:38.969004] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:35:31.529 [2024-11-20 13:55:38.969011] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:35:31.529 [2024-11-20 13:55:38.976953] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:35:31.529 [2024-11-20 13:55:38.976981] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:35:31.529 [2024-11-20 13:55:38.984768] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:35:31.529 [2024-11-20 13:55:38.984878] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:35:31.529 [2024-11-20 13:55:39.001748] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 76081 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 76081 ']' 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 76081 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76081 00:35:31.529 killing process with pid 76081 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:31.529 
13:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76081' 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 76081 00:35:31.529 13:55:39 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 76081 00:35:33.429 [2024-11-20 13:55:40.815905] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:35:33.429 [2024-11-20 13:55:40.844757] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:35:33.429 [2024-11-20 13:55:40.844935] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:35:33.429 [2024-11-20 13:55:40.852866] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:35:33.429 [2024-11-20 13:55:40.852937] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:35:33.429 [2024-11-20 13:55:40.852948] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:35:33.429 [2024-11-20 13:55:40.852985] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:35:33.429 [2024-11-20 13:55:40.853140] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:35:35.961 13:55:43 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:35:35.961 00:35:35.961 real 0m11.715s 00:35:35.961 user 0m8.728s 00:35:35.961 sys 0m3.748s 00:35:35.961 13:55:43 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:35.961 13:55:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:35:35.961 ************************************ 00:35:35.961 END TEST test_save_ublk_config 00:35:35.961 ************************************ 00:35:35.961 13:55:43 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:35:35.961 13:55:43 ublk -- ublk/ublk.sh@139 -- # spdk_pid=76179 00:35:35.961 13:55:43 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:35.961 13:55:43 ublk -- ublk/ublk.sh@141 -- # waitforlisten 76179 00:35:35.961 13:55:43 ublk -- common/autotest_common.sh@835 -- # '[' -z 76179 ']' 00:35:35.961 13:55:43 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:35.961 13:55:43 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:35.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:35.961 13:55:43 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:35.961 13:55:43 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:35.961 13:55:43 ublk -- common/autotest_common.sh@10 -- # set +x 00:35:35.961 [2024-11-20 13:55:43.248263] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
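For the create tests the harness restarts the target with -m 0x3, handing cores 0 and 1 to the SPDK app (the two reactor lines below); the cpumask parameter of ublk_create_target, visible as "cpumask": "1" in the saved config earlier, then narrows which of those cores service ublk queues. A hedged sketch of the same startup; exact rpc.py flag spellings can vary across SPDK versions:

    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x3 -L ublk &   # reactors on cores 0 and 1
    tgtpid=$!
    # ...wait for the RPC socket as before, then create the ublk target:
    "$SPDK_DIR"/scripts/rpc.py ublk_create_target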
00:35:35.961 [2024-11-20 13:55:43.248401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76179 ] 00:35:35.961 [2024-11-20 13:55:43.429040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:35.961 [2024-11-20 13:55:43.569509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:35.961 [2024-11-20 13:55:43.569547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:36.896 13:55:44 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:36.896 13:55:44 ublk -- common/autotest_common.sh@868 -- # return 0 00:35:36.896 13:55:44 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:35:36.896 13:55:44 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:36.896 13:55:44 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:36.896 13:55:44 ublk -- common/autotest_common.sh@10 -- # set +x 00:35:36.896 ************************************ 00:35:36.896 START TEST test_create_ublk 00:35:36.896 ************************************ 00:35:36.896 13:55:44 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:35:36.896 13:55:44 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:35:36.896 13:55:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.896 13:55:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:36.896 [2024-11-20 13:55:44.599760] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:35:36.896 [2024-11-20 13:55:44.603052] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:35:36.896 13:55:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:36.896 13:55:44 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:35:36.896 13:55:44 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:35:36.896 13:55:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:36.896 13:55:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:37.462 13:55:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.462 13:55:44 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:35:37.462 13:55:44 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:35:37.462 13:55:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.462 13:55:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:37.462 [2024-11-20 13:55:44.941948] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:35:37.462 [2024-11-20 13:55:44.942402] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:35:37.462 [2024-11-20 13:55:44.942421] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:35:37.462 [2024-11-20 13:55:44.942430] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:35:37.462 [2024-11-20 13:55:44.950282] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:35:37.462 [2024-11-20 13:55:44.950308] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:35:37.462 
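Each ublk_start_disk call decomposes into the three control commands traced here: UBLK_CMD_ADD_DEV registers the device, UBLK_CMD_SET_PARAMS pushes the queue and block geometry, and UBLK_CMD_START_DEV (its completion follows just below) is what makes /dev/ublkb0 appear. From a shell the same disk comes up with two RPCs, using the same arguments the harness passes; only $SPDK_DIR is illustrative:

    "$SPDK_DIR"/scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096   # 128 MiB bdev, 4 KiB blocks
    "$SPDK_DIR"/scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512    # ublk id 0, 4 queues, QD 512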
[2024-11-20 13:55:44.957774] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:35:37.462 [2024-11-20 13:55:44.958442] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:35:37.462 [2024-11-20 13:55:44.970778] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:35:37.462 13:55:44 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.462 13:55:44 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:35:37.462 13:55:44 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:35:37.462 13:55:44 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:35:37.462 13:55:44 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:37.462 13:55:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:37.462 13:55:45 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:37.462 13:55:45 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:35:37.462 { 00:35:37.462 "ublk_device": "/dev/ublkb0", 00:35:37.462 "id": 0, 00:35:37.462 "queue_depth": 512, 00:35:37.462 "num_queues": 4, 00:35:37.462 "bdev_name": "Malloc0" 00:35:37.462 } 00:35:37.462 ]' 00:35:37.462 13:55:45 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:35:37.462 13:55:45 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:35:37.462 13:55:45 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:35:37.462 13:55:45 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:35:37.462 13:55:45 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:35:37.462 13:55:45 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:35:37.462 13:55:45 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:35:37.462 13:55:45 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:35:37.462 13:55:45 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:35:37.721 13:55:45 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:35:37.721 13:55:45 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:35:37.721 13:55:45 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:35:37.721 13:55:45 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:35:37.721 13:55:45 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:35:37.721 13:55:45 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:35:37.721 13:55:45 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:35:37.721 13:55:45 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:35:37.721 13:55:45 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:35:37.721 13:55:45 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:35:37.721 13:55:45 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:35:37.721 13:55:45 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
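The template assembled above drives a single-job psync write of pattern 0xcc across the full 128 MiB device with O_DIRECT for a fixed 10 s; because --time_based lets the write phase consume the entire runtime, fio warns right at the start of the run below that the verify read phase will never begin. A hedged variant that lets verification actually execute, same device and pattern:

    # Drop --time_based/--runtime so fio finishes one full write pass and then
    # re-reads the device to check the 0xcc pattern.
    fio --name=fio_verify --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --bs=4096 --ioengine=psync --iodepth=1 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc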
00:35:37.721 13:55:45 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:35:37.721 fio: verification read phase will never start because write phase uses all of runtime 00:35:37.721 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:35:37.721 fio-3.35 00:35:37.721 Starting 1 process 00:35:47.700 00:35:47.700 fio_test: (groupid=0, jobs=1): err= 0: pid=76231: Wed Nov 20 13:55:55 2024 00:35:47.700 write: IOPS=13.2k, BW=51.5MiB/s (54.0MB/s)(515MiB/10001msec); 0 zone resets 00:35:47.700 clat (usec): min=44, max=9669, avg=74.86, stdev=145.26 00:35:47.700 lat (usec): min=44, max=9698, avg=75.39, stdev=145.30 00:35:47.700 clat percentiles (usec): 00:35:47.700 | 1.00th=[ 58], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 62], 00:35:47.700 | 30.00th=[ 63], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 69], 00:35:47.700 | 70.00th=[ 71], 80.00th=[ 74], 90.00th=[ 79], 95.00th=[ 85], 00:35:47.700 | 99.00th=[ 99], 99.50th=[ 114], 99.90th=[ 2999], 99.95th=[ 3818], 00:35:47.700 | 99.99th=[ 4113] 00:35:47.700 bw ( KiB/s): min=19240, max=56648, per=99.84%, avg=52689.26, stdev=8351.51, samples=19 00:35:47.700 iops : min= 4810, max=14162, avg=13172.32, stdev=2087.88, samples=19 00:35:47.700 lat (usec) : 50=0.03%, 100=99.09%, 250=0.59%, 500=0.01%, 750=0.01% 00:35:47.700 lat (usec) : 1000=0.02% 00:35:47.700 lat (msec) : 2=0.09%, 4=0.14%, 10=0.03% 00:35:47.700 cpu : usr=1.51%, sys=10.76%, ctx=131959, majf=0, minf=797 00:35:47.700 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:47.700 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.700 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:47.700 issued rwts: total=0,131953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:47.700 latency : target=0, window=0, percentile=100.00%, depth=1 00:35:47.700 00:35:47.700 Run status group 0 (all jobs): 00:35:47.700 WRITE: bw=51.5MiB/s (54.0MB/s), 51.5MiB/s-51.5MiB/s (54.0MB/s-54.0MB/s), io=515MiB (540MB), run=10001-10001msec 00:35:47.700 00:35:47.700 Disk stats (read/write): 00:35:47.701 ublkb0: ios=0/130474, merge=0/0, ticks=0/8601, in_queue=8602, util=99.11% 00:35:47.960 13:55:55 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:47.960 [2024-11-20 13:55:55.428845] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:35:47.960 [2024-11-20 13:55:55.472821] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:35:47.960 [2024-11-20 13:55:55.473565] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:35:47.960 [2024-11-20 13:55:55.480765] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:35:47.960 [2024-11-20 13:55:55.481091] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:35:47.960 [2024-11-20 13:55:55.481109] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.960 13:55:55 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:47.960 [2024-11-20 13:55:55.504836] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:35:47.960 request: 00:35:47.960 { 00:35:47.960 "ublk_id": 0, 00:35:47.960 "method": "ublk_stop_disk", 00:35:47.960 "req_id": 1 00:35:47.960 } 00:35:47.960 Got JSON-RPC error response 00:35:47.960 response: 00:35:47.960 { 00:35:47.960 "code": -19, 00:35:47.960 "message": "No such device" 00:35:47.960 } 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:47.960 13:55:55 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:47.960 [2024-11-20 13:55:55.519877] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:35:47.960 [2024-11-20 13:55:55.527751] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:35:47.960 [2024-11-20 13:55:55.527798] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.960 13:55:55 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.960 13:55:55 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:48.896 13:55:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.896 13:55:56 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:35:48.896 13:55:56 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:35:48.896 13:55:56 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.896 13:55:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:48.896 13:55:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.896 13:55:56 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:35:48.896 13:55:56 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:35:48.896 13:55:56 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:35:48.896 13:55:56 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:35:48.896 13:55:56 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.896 13:55:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:48.896 13:55:56 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.896 13:55:56 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:35:48.896 13:55:56 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:35:48.896 ************************************ 00:35:48.896 END TEST test_create_ublk 00:35:48.896 ************************************ 00:35:48.896 13:55:56 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:35:48.896 00:35:48.896 real 0m11.904s 00:35:48.896 user 0m0.517s 00:35:48.896 sys 0m1.176s 00:35:48.896 13:55:56 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:48.896 13:55:56 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:48.896 13:55:56 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:35:48.896 13:55:56 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:48.896 13:55:56 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:48.896 13:55:56 ublk -- common/autotest_common.sh@10 -- # set +x 00:35:48.896 ************************************ 00:35:48.896 START TEST test_create_multi_ublk 00:35:48.896 ************************************ 00:35:48.896 13:55:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:35:48.896 13:55:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:35:48.896 13:55:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.896 13:55:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:48.896 [2024-11-20 13:55:56.554736] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:35:48.896 [2024-11-20 13:55:56.557909] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:35:48.896 13:55:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:48.896 13:55:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:35:48.896 13:55:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:35:48.896 13:55:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:48.896 13:55:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:35:48.896 13:55:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:48.896 13:55:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:49.465 13:55:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.465 13:55:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:35:49.465 13:55:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:35:49.465 13:55:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.465 13:55:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:49.465 [2024-11-20 13:55:56.896909] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:35:49.465 [2024-11-20 13:55:56.897370] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:35:49.465 [2024-11-20 13:55:56.897387] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:35:49.465 [2024-11-20 13:55:56.897401] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:35:49.465 [2024-11-20 13:55:56.905252] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:35:49.465 [2024-11-20 13:55:56.905281] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:35:49.465 [2024-11-20 13:55:56.912751] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:35:49.465 [2024-11-20 13:55:56.913421] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:35:49.465 [2024-11-20 13:55:56.923862] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:35:49.465 13:55:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.465 13:55:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:35:49.465 13:55:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:49.465 13:55:56 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:35:49.465 13:55:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.465 13:55:56 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:49.724 13:55:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.724 13:55:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:35:49.724 13:55:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:35:49.724 13:55:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.724 13:55:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:49.724 [2024-11-20 13:55:57.272923] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:35:49.724 [2024-11-20 13:55:57.273365] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:35:49.724 [2024-11-20 13:55:57.273384] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:35:49.724 [2024-11-20 13:55:57.273392] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:35:49.724 [2024-11-20 13:55:57.280784] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:35:49.724 [2024-11-20 13:55:57.280839] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:35:49.724 [2024-11-20 13:55:57.288780] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:35:49.724 [2024-11-20 13:55:57.289482] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:35:49.724 [2024-11-20 13:55:57.294445] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:35:49.724 13:55:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.724 13:55:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:35:49.724 13:55:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:49.724 
13:55:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:35:49.724 13:55:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.724 13:55:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:49.984 13:55:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.984 13:55:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:35:49.984 13:55:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:35:49.984 13:55:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.984 13:55:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:49.984 [2024-11-20 13:55:57.664912] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:35:49.984 [2024-11-20 13:55:57.665401] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:35:49.984 [2024-11-20 13:55:57.665417] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:35:49.984 [2024-11-20 13:55:57.665428] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:35:49.984 [2024-11-20 13:55:57.672787] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:35:49.984 [2024-11-20 13:55:57.672823] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:35:49.984 [2024-11-20 13:55:57.680759] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:35:49.984 [2024-11-20 13:55:57.681507] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:35:49.984 [2024-11-20 13:55:57.687744] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:35:49.984 13:55:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.984 13:55:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:35:49.984 13:55:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:49.984 13:55:57 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:35:49.984 13:55:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.984 13:55:57 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:50.552 [2024-11-20 13:55:58.040942] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:35:50.552 [2024-11-20 13:55:58.041416] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:35:50.552 [2024-11-20 13:55:58.041436] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:35:50.552 [2024-11-20 13:55:58.041444] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:35:50.552 
[2024-11-20 13:55:58.048364] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:35:50.552 [2024-11-20 13:55:58.048390] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:35:50.552 [2024-11-20 13:55:58.055769] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:35:50.552 [2024-11-20 13:55:58.056519] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:35:50.552 [2024-11-20 13:55:58.072775] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:35:50.552 { 00:35:50.552 "ublk_device": "/dev/ublkb0", 00:35:50.552 "id": 0, 00:35:50.552 "queue_depth": 512, 00:35:50.552 "num_queues": 4, 00:35:50.552 "bdev_name": "Malloc0" 00:35:50.552 }, 00:35:50.552 { 00:35:50.552 "ublk_device": "/dev/ublkb1", 00:35:50.552 "id": 1, 00:35:50.552 "queue_depth": 512, 00:35:50.552 "num_queues": 4, 00:35:50.552 "bdev_name": "Malloc1" 00:35:50.552 }, 00:35:50.552 { 00:35:50.552 "ublk_device": "/dev/ublkb2", 00:35:50.552 "id": 2, 00:35:50.552 "queue_depth": 512, 00:35:50.552 "num_queues": 4, 00:35:50.552 "bdev_name": "Malloc2" 00:35:50.552 }, 00:35:50.552 { 00:35:50.552 "ublk_device": "/dev/ublkb3", 00:35:50.552 "id": 3, 00:35:50.552 "queue_depth": 512, 00:35:50.552 "num_queues": 4, 00:35:50.552 "bdev_name": "Malloc3" 00:35:50.552 } 00:35:50.552 ]' 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:35:50.552 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:35:50.810 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:35:50.810 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:35:50.810 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:50.810 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:35:50.810 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:35:50.810 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:35:50.810 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:35:50.810 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:35:50.810 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:35:50.810 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:35:50.810 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:35:50.810 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:35:51.069 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:35:51.069 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:51.069 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:35:51.069 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:35:51.069 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:35:51.069 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:35:51.069 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:35:51.069 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:35:51.069 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:35:51.069 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:35:51.069 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:35:51.069 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:35:51.069 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:51.069 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:35:51.069 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:35:51.069 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:35:51.327 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:35:51.327 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:35:51.327 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:35:51.327 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:35:51.327 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:35:51.327 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:35:51.327 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:35:51.327 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:35:51.327 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:35:51.327 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:51.327 13:55:58 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:35:51.327 13:55:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.327 13:55:58 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:51.327 [2024-11-20 13:55:58.937906] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:35:51.327 [2024-11-20 13:55:58.981789] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:35:51.327 [2024-11-20 13:55:58.982821] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:35:51.327 [2024-11-20 13:55:58.993997] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:35:51.327 [2024-11-20 13:55:58.994340] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:35:51.327 [2024-11-20 13:55:58.994359] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:35:51.327 13:55:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.327 13:55:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:51.327 13:55:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:35:51.327 13:55:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.327 13:55:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:51.327 [2024-11-20 13:55:59.009844] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:35:51.327 [2024-11-20 13:55:59.041350] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:35:51.327 [2024-11-20 13:55:59.042503] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:35:51.688 [2024-11-20 13:55:59.047777] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:35:51.688 [2024-11-20 13:55:59.048123] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:35:51.688 [2024-11-20 13:55:59.048143] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:51.688 [2024-11-20 13:55:59.060964] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:35:51.688 [2024-11-20 13:55:59.093289] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:35:51.688 [2024-11-20 13:55:59.094218] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:35:51.688 [2024-11-20 13:55:59.103794] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:35:51.688 [2024-11-20 13:55:59.104132] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:35:51.688 [2024-11-20 13:55:59.104149] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:35:51.688 [2024-11-20 13:55:59.119894] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:35:51.688 [2024-11-20 13:55:59.162814] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:35:51.688 [2024-11-20 13:55:59.163622] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:35:51.688 [2024-11-20 13:55:59.169789] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:35:51.688 [2024-11-20 13:55:59.170197] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:35:51.688 [2024-11-20 13:55:59.170216] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:35:51.688 [2024-11-20 13:55:59.365919] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:35:51.688 [2024-11-20 13:55:59.374020] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:35:51.688 [2024-11-20 13:55:59.374083] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:51.688 13:55:59 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:52.621 13:56:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:52.621 13:56:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:52.621 13:56:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:35:52.621 13:56:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:52.621 13:56:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:53.189 13:56:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.189 13:56:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:53.189 13:56:00 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:35:53.189 13:56:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.189 13:56:00 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:53.448 13:56:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:53.448 13:56:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:35:53.448 13:56:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:35:53.448 13:56:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:53.448 13:56:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:35:54.016 13:56:01 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:35:54.016 ************************************ 00:35:54.016 END TEST test_create_multi_ublk 00:35:54.016 ************************************ 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:35:54.016 00:35:54.016 real 0m5.013s 00:35:54.016 user 0m1.012s 00:35:54.016 sys 0m0.179s 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:54.016 13:56:01 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:35:54.016 13:56:01 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:54.016 13:56:01 ublk -- ublk/ublk.sh@147 -- # cleanup 00:35:54.016 13:56:01 ublk -- ublk/ublk.sh@130 -- # killprocess 76179 00:35:54.016 13:56:01 ublk -- common/autotest_common.sh@954 -- # '[' -z 76179 ']' 00:35:54.016 13:56:01 ublk -- common/autotest_common.sh@958 -- # kill -0 76179 00:35:54.016 13:56:01 ublk -- common/autotest_common.sh@959 -- # uname 00:35:54.016 13:56:01 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:54.016 13:56:01 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76179 00:35:54.016 killing process with pid 76179 00:35:54.016 13:56:01 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:54.016 13:56:01 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:54.016 13:56:01 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76179' 00:35:54.016 13:56:01 ublk -- common/autotest_common.sh@973 -- # kill 76179 00:35:54.016 13:56:01 ublk -- common/autotest_common.sh@978 -- # wait 76179 00:35:55.394 [2024-11-20 13:56:02.987076] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:35:55.394 [2024-11-20 13:56:02.987176] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:35:56.775 00:35:56.775 real 0m33.277s 00:35:56.775 user 0m46.430s 00:35:56.775 sys 0m11.244s 00:35:56.775 13:56:04 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:56.775 13:56:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:35:56.775 ************************************ 00:35:56.775 END TEST ublk 00:35:56.775 ************************************ 00:35:56.775 13:56:04 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:35:56.775 
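For reference, the multi-ublk test that just finished reduces to a short RPC sequence against the running spdk_tgt. A minimal sketch (device IDs, queue counts, and malloc sizes are copied from the trace above; the ublk_drv kernel module must already be loaded, which ublk_recovery.sh below does explicitly with modprobe):

  # create the userspace ublk target inside spdk_tgt
  scripts/rpc.py ublk_create_target
  # back the kernel block device with a 128 MiB malloc bdev (4 KiB blocks)
  scripts/rpc.py bdev_malloc_create -b Malloc0 128 4096
  # expose it to the kernel as /dev/ublkb0 with 4 queues of depth 512
  scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512
  # report the device table that the jq assertions above check
  scripts/rpc.py ublk_get_disks
  # tear down in reverse order
  scripts/rpc.py ublk_stop_disk 0
  scripts/rpc.py ublk_destroy_target

Each ublk_start_disk walks the same kernel handshake visible in the DEBUG lines above: UBLK_CMD_ADD_DEV, then UBLK_CMD_SET_PARAMS, then UBLK_CMD_START_DEV, each submitted and completed asynchronously.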
13:56:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:56.775 13:56:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:56.775 13:56:04 -- common/autotest_common.sh@10 -- # set +x 00:35:56.775 ************************************ 00:35:56.775 START TEST ublk_recovery 00:35:56.775 ************************************ 00:35:56.775 13:56:04 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:35:57.036 * Looking for test storage... 00:35:57.036 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:35:57.036 13:56:04 ublk_recovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:57.036 13:56:04 ublk_recovery -- common/autotest_common.sh@1693 -- # lcov --version 00:35:57.036 13:56:04 ublk_recovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:57.036 13:56:04 ublk_recovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:57.036 13:56:04 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:35:57.036 13:56:04 ublk_recovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:57.036 13:56:04 ublk_recovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:57.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.036 --rc genhtml_branch_coverage=1 00:35:57.036 --rc genhtml_function_coverage=1 00:35:57.036 --rc genhtml_legend=1 00:35:57.036 --rc geninfo_all_blocks=1 00:35:57.036 --rc geninfo_unexecuted_blocks=1 00:35:57.036 00:35:57.036 ' 00:35:57.036 13:56:04 ublk_recovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:57.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.036 --rc genhtml_branch_coverage=1 00:35:57.036 --rc genhtml_function_coverage=1 00:35:57.036 --rc genhtml_legend=1 00:35:57.036 --rc geninfo_all_blocks=1 00:35:57.036 --rc geninfo_unexecuted_blocks=1 00:35:57.036 00:35:57.036 ' 00:35:57.036 13:56:04 ublk_recovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:57.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.036 --rc genhtml_branch_coverage=1 00:35:57.036 --rc genhtml_function_coverage=1 00:35:57.036 --rc genhtml_legend=1 00:35:57.036 --rc geninfo_all_blocks=1 00:35:57.036 --rc geninfo_unexecuted_blocks=1 00:35:57.036 00:35:57.036 ' 00:35:57.036 13:56:04 ublk_recovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:57.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.036 --rc genhtml_branch_coverage=1 00:35:57.036 --rc genhtml_function_coverage=1 00:35:57.036 --rc genhtml_legend=1 00:35:57.036 --rc geninfo_all_blocks=1 00:35:57.036 --rc geninfo_unexecuted_blocks=1 00:35:57.036 00:35:57.036 ' 00:35:57.036 13:56:04 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:35:57.036 13:56:04 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:35:57.036 13:56:04 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:35:57.036 13:56:04 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:35:57.036 13:56:04 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:35:57.036 13:56:04 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:35:57.036 13:56:04 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:35:57.036 13:56:04 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:35:57.036 13:56:04 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:35:57.036 13:56:04 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:35:57.036 13:56:04 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=76610 00:35:57.036 13:56:04 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:35:57.036 13:56:04 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:35:57.036 13:56:04 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 76610 00:35:57.036 13:56:04 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76610 ']' 00:35:57.036 13:56:04 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:57.036 13:56:04 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:57.036 13:56:04 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:57.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:57.036 13:56:04 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:57.036 13:56:04 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:35:57.297 [2024-11-20 13:56:04.811485] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:35:57.297 [2024-11-20 13:56:04.811756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76610 ] 00:35:57.297 [2024-11-20 13:56:04.994177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:57.556 [2024-11-20 13:56:05.143980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:57.556 [2024-11-20 13:56:05.144025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:58.493 13:56:06 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:58.493 13:56:06 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:35:58.493 13:56:06 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:35:58.493 13:56:06 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.493 13:56:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:35:58.493 [2024-11-20 13:56:06.192765] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:35:58.493 [2024-11-20 13:56:06.196216] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:35:58.493 13:56:06 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.493 13:56:06 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:35:58.493 13:56:06 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.493 13:56:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:35:58.752 malloc0 00:35:58.752 13:56:06 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.752 13:56:06 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:35:58.752 13:56:06 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:58.752 13:56:06 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:35:58.752 [2024-11-20 13:56:06.373966] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:35:58.752 [2024-11-20 13:56:06.374116] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:35:58.752 [2024-11-20 13:56:06.374129] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:35:58.752 [2024-11-20 13:56:06.374140] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:35:58.753 [2024-11-20 13:56:06.382009] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:35:58.753 [2024-11-20 13:56:06.382038] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:35:58.753 [2024-11-20 13:56:06.389807] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:35:58.753 [2024-11-20 13:56:06.390001] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:35:58.753 [2024-11-20 13:56:06.405835] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:35:58.753 1 00:35:58.753 13:56:06 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:58.753 13:56:06 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:36:00.132 13:56:07 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=76656 00:36:00.132 13:56:07 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:36:00.132 13:56:07 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:36:00.132 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:36:00.132 fio-3.35 00:36:00.132 Starting 1 process 00:36:05.436 13:56:12 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 76610 00:36:05.436 13:56:12 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:36:10.744 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 76610 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:36:10.744 13:56:17 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=76762 00:36:10.744 13:56:17 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:36:10.744 13:56:17 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:36:10.744 13:56:17 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 76762 00:36:10.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:10.744 13:56:17 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 76762 ']' 00:36:10.744 13:56:17 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:10.744 13:56:17 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:10.744 13:56:17 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:10.744 13:56:17 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:10.744 13:56:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:36:10.744 [2024-11-20 13:56:17.545973] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
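This is the crux of the recovery scenario: the first target (pid 76610) has just been killed with SIGKILL while fio holds up to 128 requests in flight against /dev/ublkb1, and a second spdk_tgt (pid 76762) is starting up to adopt the orphaned device. Stripped of the harness, the flow is roughly (a sketch using the pids, names, and flags from this run):

  # I/O keeps running against the kernel device for the whole 60 s window
  taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
      --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
      --time_based --runtime=60 &
  kill -9 "$spdk_pid"                         # crash the target mid-I/O
  "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &   # bring up a replacement
  scripts/rpc.py ublk_create_target
  scripts/rpc.py ublk_recover_disk malloc0 1  # re-adopt /dev/ublkb1

The UBLK_CMD_START_USER_RECOVERY and UBLK_CMD_END_USER_RECOVERY control commands traced below perform the actual hand-off; fio is expected to ride out the outage and finish its run, which the err= 0 job summary and the 99.96% device utilization further down confirm.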
00:36:10.744 [2024-11-20 13:56:17.546094] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76762 ] 00:36:10.744 [2024-11-20 13:56:17.722040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:10.744 [2024-11-20 13:56:17.866633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:10.744 [2024-11-20 13:56:17.866667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:11.313 13:56:18 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:11.313 13:56:18 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:36:11.313 13:56:18 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:36:11.313 13:56:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.313 13:56:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:36:11.313 [2024-11-20 13:56:18.878779] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:36:11.313 [2024-11-20 13:56:18.882118] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:36:11.313 13:56:18 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.313 13:56:18 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:36:11.313 13:56:18 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.313 13:56:18 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:36:11.572 malloc0 00:36:11.572 13:56:19 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.572 13:56:19 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:36:11.572 13:56:19 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:11.572 13:56:19 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:36:11.572 [2024-11-20 13:56:19.065917] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:36:11.572 [2024-11-20 13:56:19.065968] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:36:11.573 [2024-11-20 13:56:19.065980] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:36:11.573 [2024-11-20 13:56:19.073811] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:36:11.573 [2024-11-20 13:56:19.073839] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:36:11.573 [2024-11-20 13:56:19.073848] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:36:11.573 [2024-11-20 13:56:19.073942] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:36:11.573 1 00:36:11.573 13:56:19 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:11.573 13:56:19 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 76656 00:36:11.573 [2024-11-20 13:56:19.081754] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:36:11.573 [2024-11-20 13:56:19.088431] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:36:11.573 [2024-11-20 13:56:19.094956] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:36:11.573 [2024-11-20 
13:56:19.094981] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:37:07.816 00:37:07.816 fio_test: (groupid=0, jobs=1): err= 0: pid=76659: Wed Nov 20 13:57:07 2024 00:37:07.816 read: IOPS=20.2k, BW=78.9MiB/s (82.7MB/s)(4734MiB/60002msec) 00:37:07.816 slat (nsec): min=1536, max=459150, avg=8445.72, stdev=2957.29 00:37:07.816 clat (usec): min=1280, max=6682.8k, avg=3141.19, stdev=50386.73 00:37:07.816 lat (usec): min=1288, max=6682.8k, avg=3149.64, stdev=50386.74 00:37:07.816 clat percentiles (usec): 00:37:07.816 | 1.00th=[ 2147], 5.00th=[ 2311], 10.00th=[ 2376], 20.00th=[ 2442], 00:37:07.816 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2638], 00:37:07.816 | 70.00th=[ 2769], 80.00th=[ 2966], 90.00th=[ 3294], 95.00th=[ 3982], 00:37:07.816 | 99.00th=[ 5211], 99.50th=[ 5800], 99.90th=[ 7242], 99.95th=[ 7898], 00:37:07.816 | 99.99th=[12780] 00:37:07.816 bw ( KiB/s): min= 4552, max=98792, per=100.00%, avg=89930.54, stdev=12318.38, samples=107 00:37:07.816 iops : min= 1138, max=24698, avg=22482.61, stdev=3079.60, samples=107 00:37:07.816 write: IOPS=20.2k, BW=78.8MiB/s (82.7MB/s)(4731MiB/60002msec); 0 zone resets 00:37:07.816 slat (usec): min=2, max=2483, avg= 8.84, stdev= 3.76 00:37:07.816 clat (usec): min=1365, max=6683.1k, avg=3178.79, stdev=46610.67 00:37:07.816 lat (usec): min=1374, max=6683.1k, avg=3187.63, stdev=46610.67 00:37:07.816 clat percentiles (usec): 00:37:07.816 | 1.00th=[ 2147], 5.00th=[ 2343], 10.00th=[ 2442], 20.00th=[ 2540], 00:37:07.816 | 30.00th=[ 2606], 40.00th=[ 2638], 50.00th=[ 2671], 60.00th=[ 2737], 00:37:07.816 | 70.00th=[ 2868], 80.00th=[ 3064], 90.00th=[ 3359], 95.00th=[ 3982], 00:37:07.816 | 99.00th=[ 5276], 99.50th=[ 5866], 99.90th=[ 7439], 99.95th=[ 8029], 00:37:07.816 | 99.99th=[12911] 00:37:07.816 bw ( KiB/s): min= 4784, max=98168, per=100.00%, avg=89865.90, stdev=12323.84, samples=107 00:37:07.816 iops : min= 1196, max=24542, avg=22466.44, stdev=3080.96, samples=107 00:37:07.816 lat (msec) : 2=0.26%, 4=94.83%, 10=4.90%, 20=0.01%, >=2000=0.01% 00:37:07.816 cpu : usr=8.20%, sys=35.18%, ctx=99621, majf=0, minf=13 00:37:07.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:37:07.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:07.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:37:07.816 issued rwts: total=1212019,1211178,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:07.816 latency : target=0, window=0, percentile=100.00%, depth=128 00:37:07.816 00:37:07.816 Run status group 0 (all jobs): 00:37:07.816 READ: bw=78.9MiB/s (82.7MB/s), 78.9MiB/s-78.9MiB/s (82.7MB/s-82.7MB/s), io=4734MiB (4964MB), run=60002-60002msec 00:37:07.816 WRITE: bw=78.8MiB/s (82.7MB/s), 78.8MiB/s-78.8MiB/s (82.7MB/s-82.7MB/s), io=4731MiB (4961MB), run=60002-60002msec 00:37:07.816 00:37:07.816 Disk stats (read/write): 00:37:07.816 ublkb1: ios=1209798/1208942, merge=0/0, ticks=3696505/3606419, in_queue=7302924, util=99.96% 00:37:07.816 13:57:07 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:37:07.816 13:57:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.816 13:57:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:07.816 [2024-11-20 13:57:07.701337] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:37:07.816 [2024-11-20 13:57:07.739787] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:37:07.816 [2024-11-20 
13:57:07.740043] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:37:07.816 [2024-11-20 13:57:07.747779] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:37:07.816 [2024-11-20 13:57:07.747924] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:37:07.816 [2024-11-20 13:57:07.747937] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:37:07.816 13:57:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.816 13:57:07 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:37:07.817 13:57:07 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.817 13:57:07 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:07.817 [2024-11-20 13:57:07.763891] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:37:07.817 [2024-11-20 13:57:07.771748] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:37:07.817 [2024-11-20 13:57:07.771790] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:37:07.817 13:57:07 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.817 13:57:07 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:37:07.817 13:57:07 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:37:07.817 13:57:07 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 76762 00:37:07.817 13:57:07 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 76762 ']' 00:37:07.817 13:57:07 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 76762 00:37:07.817 13:57:07 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:37:07.817 13:57:07 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:07.817 13:57:07 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76762 00:37:07.817 killing process with pid 76762 00:37:07.817 13:57:07 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:07.817 13:57:07 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:07.817 13:57:07 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76762' 00:37:07.817 13:57:07 ublk_recovery -- common/autotest_common.sh@973 -- # kill 76762 00:37:07.817 13:57:07 ublk_recovery -- common/autotest_common.sh@978 -- # wait 76762 00:37:07.817 [2024-11-20 13:57:09.562425] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:37:07.817 [2024-11-20 13:57:09.562528] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:37:07.817 ************************************ 00:37:07.817 END TEST ublk_recovery 00:37:07.817 ************************************ 00:37:07.817 00:37:07.817 real 1m6.705s 00:37:07.817 user 1m47.239s 00:37:07.817 sys 0m41.127s 00:37:07.817 13:57:11 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:07.817 13:57:11 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:37:07.817 13:57:11 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:37:07.817 13:57:11 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:37:07.817 13:57:11 -- spdk/autotest.sh@260 -- # timing_exit lib 00:37:07.817 13:57:11 -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:07.817 13:57:11 -- common/autotest_common.sh@10 -- # set +x 00:37:07.817 13:57:11 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:37:07.817 13:57:11 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:37:07.817 13:57:11 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:37:07.817 13:57:11 -- 
spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:37:07.817 13:57:11 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:37:07.817 13:57:11 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:37:07.817 13:57:11 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:37:07.817 13:57:11 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:37:07.817 13:57:11 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:37:07.817 13:57:11 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:37:07.817 13:57:11 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:37:07.817 13:57:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:07.817 13:57:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:07.817 13:57:11 -- common/autotest_common.sh@10 -- # set +x 00:37:07.817 ************************************ 00:37:07.817 START TEST ftl 00:37:07.817 ************************************ 00:37:07.817 13:57:11 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:37:07.817 * Looking for test storage... 00:37:07.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:37:07.817 13:57:11 ftl -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:07.817 13:57:11 ftl -- common/autotest_common.sh@1693 -- # lcov --version 00:37:07.817 13:57:11 ftl -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:07.817 13:57:11 ftl -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:07.817 13:57:11 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:07.817 13:57:11 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:07.817 13:57:11 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:07.817 13:57:11 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:37:07.817 13:57:11 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:37:07.817 13:57:11 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:37:07.817 13:57:11 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:37:07.817 13:57:11 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:37:07.817 13:57:11 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:37:07.817 13:57:11 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:37:07.817 13:57:11 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:07.817 13:57:11 ftl -- scripts/common.sh@344 -- # case "$op" in 00:37:07.817 13:57:11 ftl -- scripts/common.sh@345 -- # : 1 00:37:07.817 13:57:11 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:07.817 13:57:11 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:07.817 13:57:11 ftl -- scripts/common.sh@365 -- # decimal 1 00:37:07.817 13:57:11 ftl -- scripts/common.sh@353 -- # local d=1 00:37:07.817 13:57:11 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:07.817 13:57:11 ftl -- scripts/common.sh@355 -- # echo 1 00:37:07.817 13:57:11 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:37:07.817 13:57:11 ftl -- scripts/common.sh@366 -- # decimal 2 00:37:07.817 13:57:11 ftl -- scripts/common.sh@353 -- # local d=2 00:37:07.817 13:57:11 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:07.817 13:57:11 ftl -- scripts/common.sh@355 -- # echo 2 00:37:07.817 13:57:11 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:37:07.817 13:57:11 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:07.817 13:57:11 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:07.817 13:57:11 ftl -- scripts/common.sh@368 -- # return 0 00:37:07.817 13:57:11 ftl -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:07.817 13:57:11 ftl -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:07.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.817 --rc genhtml_branch_coverage=1 00:37:07.817 --rc genhtml_function_coverage=1 00:37:07.817 --rc genhtml_legend=1 00:37:07.817 --rc geninfo_all_blocks=1 00:37:07.817 --rc geninfo_unexecuted_blocks=1 00:37:07.817 00:37:07.817 ' 00:37:07.817 13:57:11 ftl -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:07.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.817 --rc genhtml_branch_coverage=1 00:37:07.817 --rc genhtml_function_coverage=1 00:37:07.817 --rc genhtml_legend=1 00:37:07.817 --rc geninfo_all_blocks=1 00:37:07.817 --rc geninfo_unexecuted_blocks=1 00:37:07.817 00:37:07.817 ' 00:37:07.817 13:57:11 ftl -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:07.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.817 --rc genhtml_branch_coverage=1 00:37:07.817 --rc genhtml_function_coverage=1 00:37:07.817 --rc genhtml_legend=1 00:37:07.817 --rc geninfo_all_blocks=1 00:37:07.817 --rc geninfo_unexecuted_blocks=1 00:37:07.817 00:37:07.817 ' 00:37:07.817 13:57:11 ftl -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:07.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:07.817 --rc genhtml_branch_coverage=1 00:37:07.817 --rc genhtml_function_coverage=1 00:37:07.817 --rc genhtml_legend=1 00:37:07.817 --rc geninfo_all_blocks=1 00:37:07.817 --rc geninfo_unexecuted_blocks=1 00:37:07.817 00:37:07.817 ' 00:37:07.817 13:57:11 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:37:07.817 13:57:11 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:37:07.817 13:57:11 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:37:07.817 13:57:11 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:37:07.817 13:57:11 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
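A note on what ftl.sh is setting up here: every FTL test needs two NVMe namespaces, a cache device whose namespace carries 64-byte per-block metadata (used for the non-volatile write cache) and a separate base device for user data. The probing traced just below finds both by dumping all bdevs and filtering with jq; condensed, with the filter expressions copied from this run, the selection is roughly:

  scripts/rpc.py bdev_get_bdevs | jq -r \
    '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
       .driver_specific.nvme[].pci_address'    # cache: 0000:00:10.0 on this host
  scripts/rpc.py bdev_get_bdevs | jq -r \
    '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0"
       and .zoned == false and .num_blocks >= 1310720)
       .driver_specific.nvme[].pci_address'    # base: 0000:00:11.0 on this host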
00:37:07.817 13:57:11 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:37:07.817 13:57:11 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:07.817 13:57:11 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:37:07.817 13:57:11 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:37:07.817 13:57:11 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:07.817 13:57:11 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:07.817 13:57:11 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:37:07.817 13:57:11 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:37:07.818 13:57:11 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:07.818 13:57:11 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:07.818 13:57:11 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:37:07.818 13:57:11 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:37:07.818 13:57:11 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:07.818 13:57:11 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:07.818 13:57:11 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:37:07.818 13:57:11 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:37:07.818 13:57:11 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:07.818 13:57:11 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:07.818 13:57:11 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:07.818 13:57:11 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:07.818 13:57:11 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:37:07.818 13:57:11 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:37:07.818 13:57:11 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:07.818 13:57:11 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:07.818 13:57:11 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:07.818 13:57:11 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:37:07.818 13:57:11 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:37:07.818 13:57:11 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:37:07.818 13:57:11 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:37:07.818 13:57:11 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:37:07.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:37:07.818 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:37:07.818 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:37:07.818 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:37:07.818 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:37:07.818 13:57:12 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=77568 00:37:07.818 13:57:12 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:37:07.818 13:57:12 ftl -- ftl/ftl.sh@38 -- # waitforlisten 77568 00:37:07.818 13:57:12 ftl -- common/autotest_common.sh@835 -- # '[' -z 77568 ']' 00:37:07.818 13:57:12 ftl -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:07.818 13:57:12 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:07.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:07.818 13:57:12 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:07.818 13:57:12 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:07.818 13:57:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:37:07.818 [2024-11-20 13:57:12.489369] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:37:07.818 [2024-11-20 13:57:12.489982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77568 ] 00:37:07.818 [2024-11-20 13:57:12.671434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:07.818 [2024-11-20 13:57:12.810612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:07.818 13:57:13 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:07.818 13:57:13 ftl -- common/autotest_common.sh@868 -- # return 0 00:37:07.818 13:57:13 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:37:07.818 13:57:13 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:37:07.818 13:57:14 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:37:07.818 13:57:14 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:37:07.818 13:57:15 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:37:07.818 13:57:15 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:37:07.818 13:57:15 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:37:07.818 13:57:15 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:37:07.818 13:57:15 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:37:07.818 13:57:15 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:37:07.818 13:57:15 ftl -- ftl/ftl.sh@50 -- # break 00:37:07.818 13:57:15 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:37:07.818 13:57:15 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:37:07.818 13:57:15 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:37:07.818 13:57:15 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:37:08.077 13:57:15 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:37:08.077 13:57:15 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:37:08.077 13:57:15 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:37:08.077 13:57:15 ftl -- ftl/ftl.sh@63 -- # break 00:37:08.077 13:57:15 ftl -- ftl/ftl.sh@66 -- # killprocess 77568 00:37:08.077 13:57:15 ftl -- common/autotest_common.sh@954 -- # '[' -z 77568 ']' 00:37:08.077 13:57:15 ftl -- common/autotest_common.sh@958 -- # kill -0 77568 00:37:08.077 13:57:15 ftl -- common/autotest_common.sh@959 -- # uname 00:37:08.077 13:57:15 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:08.077 13:57:15 ftl -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77568 00:37:08.077 13:57:15 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:08.077 13:57:15 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:08.077 13:57:15 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77568' 00:37:08.077 killing process with pid 77568 00:37:08.077 13:57:15 ftl -- common/autotest_common.sh@973 -- # kill 77568 00:37:08.077 13:57:15 ftl -- common/autotest_common.sh@978 -- # wait 77568 00:37:10.612 13:57:18 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:37:10.612 13:57:18 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:37:10.612 13:57:18 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:37:10.612 13:57:18 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:10.612 13:57:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:37:10.871 ************************************ 00:37:10.871 START TEST ftl_fio_basic 00:37:10.871 ************************************ 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:37:10.871 * Looking for test storage... 00:37:10.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lcov --version 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:10.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.871 --rc genhtml_branch_coverage=1 00:37:10.871 --rc genhtml_function_coverage=1 00:37:10.871 --rc genhtml_legend=1 00:37:10.871 --rc geninfo_all_blocks=1 00:37:10.871 --rc geninfo_unexecuted_blocks=1 00:37:10.871 00:37:10.871 ' 00:37:10.871 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:10.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.871 --rc genhtml_branch_coverage=1 00:37:10.871 --rc genhtml_function_coverage=1 00:37:10.871 --rc genhtml_legend=1 00:37:10.871 --rc geninfo_all_blocks=1 00:37:10.871 --rc geninfo_unexecuted_blocks=1 00:37:10.871 00:37:10.871 ' 00:37:10.872 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:10.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.872 --rc genhtml_branch_coverage=1 00:37:10.872 --rc genhtml_function_coverage=1 00:37:10.872 --rc genhtml_legend=1 00:37:10.872 --rc geninfo_all_blocks=1 00:37:10.872 --rc geninfo_unexecuted_blocks=1 00:37:10.872 00:37:10.872 ' 00:37:10.872 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:10.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:10.872 --rc genhtml_branch_coverage=1 00:37:10.872 --rc genhtml_function_coverage=1 00:37:10.872 --rc genhtml_legend=1 00:37:10.872 --rc geninfo_all_blocks=1 00:37:10.872 --rc geninfo_unexecuted_blocks=1 00:37:10.872 00:37:10.872 ' 00:37:10.872 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:37:10.872 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:37:10.872 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:37:10.872 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:37:10.872 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:37:10.872 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=77720 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 77720 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 77720 ']' 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:11.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:11.131 13:57:18 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:37:11.132 [2024-11-20 13:57:18.716589] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
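Annotation: the -m 7 passed to spdk_tgt above is a hexadecimal core mask; 0x7 = 0b111 selects cores 0, 1 and 2, which is exactly why the EAL parameters that follow carry -c 7, three cores are reported available, and three reactors start. A quick illustrative helper (not part of the SPDK tree) to expand such a mask:

  # Expand a hex cpumask into a core list; names here are hypothetical.
  mask_to_cores() {
      local mask=$(( $1 )) core=0 cores=()
      while (( mask )); do
          (( mask & 1 )) && cores+=("$core")   # bit set -> core selected
          mask=$(( mask >> 1 ))
          core=$(( core + 1 ))
      done
      echo "${cores[*]}"
  }
  mask_to_cores 0x7   # prints: 0 1 2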
00:37:11.132 [2024-11-20 13:57:18.716890] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77720 ] 00:37:11.450 [2024-11-20 13:57:18.893452] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:11.450 [2024-11-20 13:57:19.037650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:11.450 [2024-11-20 13:57:19.037811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:11.450 [2024-11-20 13:57:19.037851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:12.428 13:57:20 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:12.428 13:57:20 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:37:12.428 13:57:20 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:37:12.428 13:57:20 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:37:12.428 13:57:20 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:37:12.428 13:57:20 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:37:12.428 13:57:20 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:37:12.428 13:57:20 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:37:12.997 13:57:20 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:37:12.997 13:57:20 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:37:12.997 13:57:20 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:37:12.997 13:57:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:37:12.997 13:57:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:12.997 13:57:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:37:12.997 13:57:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:37:12.997 13:57:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:37:12.997 13:57:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:12.997 { 00:37:12.997 "name": "nvme0n1", 00:37:12.997 "aliases": [ 00:37:12.997 "011ca0f9-3a98-49fe-a1aa-3cddc40411ba" 00:37:12.997 ], 00:37:12.997 "product_name": "NVMe disk", 00:37:12.997 "block_size": 4096, 00:37:12.997 "num_blocks": 1310720, 00:37:12.997 "uuid": "011ca0f9-3a98-49fe-a1aa-3cddc40411ba", 00:37:12.997 "numa_id": -1, 00:37:12.997 "assigned_rate_limits": { 00:37:12.997 "rw_ios_per_sec": 0, 00:37:12.997 "rw_mbytes_per_sec": 0, 00:37:12.997 "r_mbytes_per_sec": 0, 00:37:12.997 "w_mbytes_per_sec": 0 00:37:12.997 }, 00:37:12.997 "claimed": false, 00:37:12.997 "zoned": false, 00:37:12.997 "supported_io_types": { 00:37:12.997 "read": true, 00:37:12.997 "write": true, 00:37:12.997 "unmap": true, 00:37:12.997 "flush": true, 00:37:12.997 "reset": true, 00:37:12.997 "nvme_admin": true, 00:37:12.997 "nvme_io": true, 00:37:12.997 "nvme_io_md": false, 00:37:12.997 "write_zeroes": true, 00:37:12.997 "zcopy": false, 00:37:12.997 "get_zone_info": false, 00:37:12.997 "zone_management": false, 00:37:12.997 "zone_append": false, 00:37:12.997 "compare": true, 00:37:12.997 "compare_and_write": false, 00:37:12.997 "abort": true, 00:37:12.997 
"seek_hole": false, 00:37:12.997 "seek_data": false, 00:37:12.997 "copy": true, 00:37:12.997 "nvme_iov_md": false 00:37:12.997 }, 00:37:12.997 "driver_specific": { 00:37:12.997 "nvme": [ 00:37:12.997 { 00:37:12.997 "pci_address": "0000:00:11.0", 00:37:12.997 "trid": { 00:37:12.997 "trtype": "PCIe", 00:37:12.997 "traddr": "0000:00:11.0" 00:37:12.997 }, 00:37:12.997 "ctrlr_data": { 00:37:12.997 "cntlid": 0, 00:37:12.997 "vendor_id": "0x1b36", 00:37:12.997 "model_number": "QEMU NVMe Ctrl", 00:37:12.997 "serial_number": "12341", 00:37:12.997 "firmware_revision": "8.0.0", 00:37:12.997 "subnqn": "nqn.2019-08.org.qemu:12341", 00:37:12.997 "oacs": { 00:37:12.997 "security": 0, 00:37:12.997 "format": 1, 00:37:12.997 "firmware": 0, 00:37:12.997 "ns_manage": 1 00:37:12.997 }, 00:37:12.997 "multi_ctrlr": false, 00:37:12.997 "ana_reporting": false 00:37:12.997 }, 00:37:12.997 "vs": { 00:37:12.997 "nvme_version": "1.4" 00:37:12.997 }, 00:37:12.997 "ns_data": { 00:37:12.997 "id": 1, 00:37:12.997 "can_share": false 00:37:12.997 } 00:37:12.997 } 00:37:12.997 ], 00:37:12.997 "mp_policy": "active_passive" 00:37:12.997 } 00:37:12.997 } 00:37:12.997 ]' 00:37:12.997 13:57:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:12.997 13:57:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:37:12.997 13:57:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:13.257 13:57:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:37:13.257 13:57:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:37:13.257 13:57:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:37:13.257 13:57:20 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:37:13.257 13:57:20 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:37:13.257 13:57:20 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:37:13.257 13:57:20 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:37:13.257 13:57:20 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:37:13.257 13:57:20 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:37:13.257 13:57:20 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:37:13.516 13:57:21 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=e3d7cb4a-423e-460f-9629-586fb4a8c35d 00:37:13.516 13:57:21 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u e3d7cb4a-423e-460f-9629-586fb4a8c35d 00:37:13.775 13:57:21 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=95632865-bcde-4b67-8d37-235b4ea50faa 00:37:13.775 13:57:21 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 95632865-bcde-4b67-8d37-235b4ea50faa 00:37:13.775 13:57:21 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:37:13.776 13:57:21 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:37:13.776 13:57:21 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=95632865-bcde-4b67-8d37-235b4ea50faa 00:37:13.776 13:57:21 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:37:13.776 13:57:21 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 95632865-bcde-4b67-8d37-235b4ea50faa 00:37:13.776 13:57:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=95632865-bcde-4b67-8d37-235b4ea50faa 
00:37:13.776 13:57:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:13.776 13:57:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:37:13.776 13:57:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:37:13.776 13:57:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 95632865-bcde-4b67-8d37-235b4ea50faa 00:37:14.035 13:57:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:14.035 { 00:37:14.035 "name": "95632865-bcde-4b67-8d37-235b4ea50faa", 00:37:14.035 "aliases": [ 00:37:14.035 "lvs/nvme0n1p0" 00:37:14.035 ], 00:37:14.035 "product_name": "Logical Volume", 00:37:14.035 "block_size": 4096, 00:37:14.035 "num_blocks": 26476544, 00:37:14.035 "uuid": "95632865-bcde-4b67-8d37-235b4ea50faa", 00:37:14.035 "assigned_rate_limits": { 00:37:14.035 "rw_ios_per_sec": 0, 00:37:14.035 "rw_mbytes_per_sec": 0, 00:37:14.035 "r_mbytes_per_sec": 0, 00:37:14.035 "w_mbytes_per_sec": 0 00:37:14.035 }, 00:37:14.035 "claimed": false, 00:37:14.035 "zoned": false, 00:37:14.035 "supported_io_types": { 00:37:14.035 "read": true, 00:37:14.035 "write": true, 00:37:14.035 "unmap": true, 00:37:14.035 "flush": false, 00:37:14.035 "reset": true, 00:37:14.035 "nvme_admin": false, 00:37:14.035 "nvme_io": false, 00:37:14.035 "nvme_io_md": false, 00:37:14.035 "write_zeroes": true, 00:37:14.035 "zcopy": false, 00:37:14.035 "get_zone_info": false, 00:37:14.035 "zone_management": false, 00:37:14.035 "zone_append": false, 00:37:14.035 "compare": false, 00:37:14.035 "compare_and_write": false, 00:37:14.035 "abort": false, 00:37:14.035 "seek_hole": true, 00:37:14.035 "seek_data": true, 00:37:14.035 "copy": false, 00:37:14.035 "nvme_iov_md": false 00:37:14.035 }, 00:37:14.035 "driver_specific": { 00:37:14.035 "lvol": { 00:37:14.035 "lvol_store_uuid": "e3d7cb4a-423e-460f-9629-586fb4a8c35d", 00:37:14.035 "base_bdev": "nvme0n1", 00:37:14.035 "thin_provision": true, 00:37:14.035 "num_allocated_clusters": 0, 00:37:14.035 "snapshot": false, 00:37:14.035 "clone": false, 00:37:14.035 "esnap_clone": false 00:37:14.035 } 00:37:14.035 } 00:37:14.035 } 00:37:14.035 ]' 00:37:14.035 13:57:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:14.035 13:57:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:37:14.035 13:57:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:14.035 13:57:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:37:14.035 13:57:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:37:14.035 13:57:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:37:14.035 13:57:21 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:37:14.035 13:57:21 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:37:14.035 13:57:21 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:37:14.295 13:57:21 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:37:14.295 13:57:21 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:37:14.295 13:57:21 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 95632865-bcde-4b67-8d37-235b4ea50faa 00:37:14.295 13:57:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=95632865-bcde-4b67-8d37-235b4ea50faa 00:37:14.295 13:57:21 
ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:14.295 13:57:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:37:14.295 13:57:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:37:14.295 13:57:21 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 95632865-bcde-4b67-8d37-235b4ea50faa 00:37:14.554 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:14.554 { 00:37:14.554 "name": "95632865-bcde-4b67-8d37-235b4ea50faa", 00:37:14.554 "aliases": [ 00:37:14.554 "lvs/nvme0n1p0" 00:37:14.554 ], 00:37:14.554 "product_name": "Logical Volume", 00:37:14.554 "block_size": 4096, 00:37:14.554 "num_blocks": 26476544, 00:37:14.554 "uuid": "95632865-bcde-4b67-8d37-235b4ea50faa", 00:37:14.554 "assigned_rate_limits": { 00:37:14.554 "rw_ios_per_sec": 0, 00:37:14.554 "rw_mbytes_per_sec": 0, 00:37:14.554 "r_mbytes_per_sec": 0, 00:37:14.554 "w_mbytes_per_sec": 0 00:37:14.554 }, 00:37:14.554 "claimed": false, 00:37:14.554 "zoned": false, 00:37:14.554 "supported_io_types": { 00:37:14.554 "read": true, 00:37:14.554 "write": true, 00:37:14.554 "unmap": true, 00:37:14.554 "flush": false, 00:37:14.554 "reset": true, 00:37:14.554 "nvme_admin": false, 00:37:14.554 "nvme_io": false, 00:37:14.554 "nvme_io_md": false, 00:37:14.554 "write_zeroes": true, 00:37:14.554 "zcopy": false, 00:37:14.554 "get_zone_info": false, 00:37:14.554 "zone_management": false, 00:37:14.554 "zone_append": false, 00:37:14.554 "compare": false, 00:37:14.554 "compare_and_write": false, 00:37:14.554 "abort": false, 00:37:14.554 "seek_hole": true, 00:37:14.554 "seek_data": true, 00:37:14.554 "copy": false, 00:37:14.554 "nvme_iov_md": false 00:37:14.554 }, 00:37:14.554 "driver_specific": { 00:37:14.554 "lvol": { 00:37:14.554 "lvol_store_uuid": "e3d7cb4a-423e-460f-9629-586fb4a8c35d", 00:37:14.554 "base_bdev": "nvme0n1", 00:37:14.554 "thin_provision": true, 00:37:14.554 "num_allocated_clusters": 0, 00:37:14.554 "snapshot": false, 00:37:14.554 "clone": false, 00:37:14.554 "esnap_clone": false 00:37:14.554 } 00:37:14.554 } 00:37:14.554 } 00:37:14.554 ]' 00:37:14.554 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:14.555 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:37:14.555 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:14.555 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:37:14.555 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:37:14.555 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:37:14.555 13:57:22 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:37:14.555 13:57:22 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:37:14.814 13:57:22 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:37:14.814 13:57:22 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:37:14.814 13:57:22 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:37:14.814 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:37:14.814 13:57:22 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 95632865-bcde-4b67-8d37-235b4ea50faa 00:37:14.814 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local 
bdev_name=95632865-bcde-4b67-8d37-235b4ea50faa 00:37:14.814 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:37:14.814 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:37:14.814 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:37:14.814 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 95632865-bcde-4b67-8d37-235b4ea50faa 00:37:15.074 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:37:15.074 { 00:37:15.074 "name": "95632865-bcde-4b67-8d37-235b4ea50faa", 00:37:15.074 "aliases": [ 00:37:15.074 "lvs/nvme0n1p0" 00:37:15.074 ], 00:37:15.074 "product_name": "Logical Volume", 00:37:15.074 "block_size": 4096, 00:37:15.074 "num_blocks": 26476544, 00:37:15.074 "uuid": "95632865-bcde-4b67-8d37-235b4ea50faa", 00:37:15.074 "assigned_rate_limits": { 00:37:15.074 "rw_ios_per_sec": 0, 00:37:15.074 "rw_mbytes_per_sec": 0, 00:37:15.074 "r_mbytes_per_sec": 0, 00:37:15.074 "w_mbytes_per_sec": 0 00:37:15.074 }, 00:37:15.074 "claimed": false, 00:37:15.074 "zoned": false, 00:37:15.074 "supported_io_types": { 00:37:15.074 "read": true, 00:37:15.074 "write": true, 00:37:15.074 "unmap": true, 00:37:15.074 "flush": false, 00:37:15.074 "reset": true, 00:37:15.074 "nvme_admin": false, 00:37:15.074 "nvme_io": false, 00:37:15.074 "nvme_io_md": false, 00:37:15.074 "write_zeroes": true, 00:37:15.074 "zcopy": false, 00:37:15.074 "get_zone_info": false, 00:37:15.074 "zone_management": false, 00:37:15.074 "zone_append": false, 00:37:15.074 "compare": false, 00:37:15.074 "compare_and_write": false, 00:37:15.074 "abort": false, 00:37:15.074 "seek_hole": true, 00:37:15.074 "seek_data": true, 00:37:15.074 "copy": false, 00:37:15.074 "nvme_iov_md": false 00:37:15.074 }, 00:37:15.074 "driver_specific": { 00:37:15.074 "lvol": { 00:37:15.074 "lvol_store_uuid": "e3d7cb4a-423e-460f-9629-586fb4a8c35d", 00:37:15.074 "base_bdev": "nvme0n1", 00:37:15.074 "thin_provision": true, 00:37:15.074 "num_allocated_clusters": 0, 00:37:15.074 "snapshot": false, 00:37:15.074 "clone": false, 00:37:15.074 "esnap_clone": false 00:37:15.074 } 00:37:15.074 } 00:37:15.074 } 00:37:15.074 ]' 00:37:15.074 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:37:15.074 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:37:15.074 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:37:15.074 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:37:15.074 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:37:15.074 13:57:22 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:37:15.334 13:57:22 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:37:15.334 13:57:22 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:37:15.334 13:57:22 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 95632865-bcde-4b67-8d37-235b4ea50faa -c nvc0n1p0 --l2p_dram_limit 60 00:37:15.334 [2024-11-20 13:57:22.960060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:15.334 [2024-11-20 13:57:22.960116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:37:15.334 [2024-11-20 13:57:22.960133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:37:15.334 
[2024-11-20 13:57:22.960141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:15.334 [2024-11-20 13:57:22.960268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:15.334 [2024-11-20 13:57:22.960284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:15.334 [2024-11-20 13:57:22.960295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:37:15.334 [2024-11-20 13:57:22.960303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:15.334 [2024-11-20 13:57:22.960363] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:37:15.334 [2024-11-20 13:57:22.961417] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:37:15.334 [2024-11-20 13:57:22.961462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:15.334 [2024-11-20 13:57:22.961472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:15.334 [2024-11-20 13:57:22.961486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.102 ms 00:37:15.334 [2024-11-20 13:57:22.961494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:15.334 [2024-11-20 13:57:22.961606] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c2a45e9c-a48e-4efb-a35c-1f5ad93e5ae6 00:37:15.334 [2024-11-20 13:57:22.963121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:15.334 [2024-11-20 13:57:22.963250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:37:15.334 [2024-11-20 13:57:22.963265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:37:15.334 [2024-11-20 13:57:22.963276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:15.334 [2024-11-20 13:57:22.970826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:15.334 [2024-11-20 13:57:22.970890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:15.334 [2024-11-20 13:57:22.970919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.434 ms 00:37:15.334 [2024-11-20 13:57:22.970942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:15.334 [2024-11-20 13:57:22.971091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:15.334 [2024-11-20 13:57:22.971142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:15.334 [2024-11-20 13:57:22.971175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:37:15.334 [2024-11-20 13:57:22.971219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:15.334 [2024-11-20 13:57:22.971383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:15.334 [2024-11-20 13:57:22.971427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:37:15.334 [2024-11-20 13:57:22.971457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:37:15.334 [2024-11-20 13:57:22.971488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:15.334 [2024-11-20 13:57:22.971572] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:37:15.334 [2024-11-20 13:57:22.976694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:15.334 [2024-11-20 
13:57:22.976798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:15.334 [2024-11-20 13:57:22.976850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.144 ms 00:37:15.334 [2024-11-20 13:57:22.976882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:15.335 [2024-11-20 13:57:22.976970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:15.335 [2024-11-20 13:57:22.977004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:37:15.335 [2024-11-20 13:57:22.977036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:37:15.335 [2024-11-20 13:57:22.977072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:15.335 [2024-11-20 13:57:22.977177] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:37:15.335 [2024-11-20 13:57:22.977369] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:37:15.335 [2024-11-20 13:57:22.977429] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:37:15.335 [2024-11-20 13:57:22.977490] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:37:15.335 [2024-11-20 13:57:22.977542] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:37:15.335 [2024-11-20 13:57:22.977589] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:37:15.335 [2024-11-20 13:57:22.977645] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:37:15.335 [2024-11-20 13:57:22.977678] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:37:15.335 [2024-11-20 13:57:22.977710] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:37:15.335 [2024-11-20 13:57:22.977750] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:37:15.335 [2024-11-20 13:57:22.977783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:15.335 [2024-11-20 13:57:22.977820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:37:15.335 [2024-11-20 13:57:22.977854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.626 ms 00:37:15.335 [2024-11-20 13:57:22.977882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:15.335 [2024-11-20 13:57:22.978005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:15.335 [2024-11-20 13:57:22.978038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:37:15.335 [2024-11-20 13:57:22.978070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:37:15.335 [2024-11-20 13:57:22.978098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:15.335 [2024-11-20 13:57:22.978277] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:37:15.335 [2024-11-20 13:57:22.978310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:37:15.335 [2024-11-20 13:57:22.978344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:15.335 [2024-11-20 13:57:22.978374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:15.335 [2024-11-20 13:57:22.978407] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:37:15.335 [2024-11-20 13:57:22.978435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:37:15.335 [2024-11-20 13:57:22.978468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:37:15.335 [2024-11-20 13:57:22.978496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:37:15.335 [2024-11-20 13:57:22.978526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:37:15.335 [2024-11-20 13:57:22.978556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:15.335 [2024-11-20 13:57:22.978587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:37:15.335 [2024-11-20 13:57:22.978612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:37:15.335 [2024-11-20 13:57:22.978642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:37:15.335 [2024-11-20 13:57:22.978672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:37:15.335 [2024-11-20 13:57:22.978705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:37:15.335 [2024-11-20 13:57:22.978740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:15.335 [2024-11-20 13:57:22.978775] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:37:15.335 [2024-11-20 13:57:22.978801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:37:15.335 [2024-11-20 13:57:22.978831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:15.335 [2024-11-20 13:57:22.978862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:37:15.335 [2024-11-20 13:57:22.978892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:37:15.335 [2024-11-20 13:57:22.978924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:15.335 [2024-11-20 13:57:22.978955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:37:15.335 [2024-11-20 13:57:22.978985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:37:15.335 [2024-11-20 13:57:22.979015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:15.335 [2024-11-20 13:57:22.979043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:37:15.335 [2024-11-20 13:57:22.979078] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:37:15.335 [2024-11-20 13:57:22.979104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:15.335 [2024-11-20 13:57:22.979137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:37:15.335 [2024-11-20 13:57:22.979165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:37:15.335 [2024-11-20 13:57:22.979196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:37:15.335 [2024-11-20 13:57:22.979226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:37:15.335 [2024-11-20 13:57:22.979260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:37:15.335 [2024-11-20 13:57:22.979289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:15.335 [2024-11-20 13:57:22.979319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:37:15.335 [2024-11-20 13:57:22.979372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:37:15.335 [2024-11-20 13:57:22.979414] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:37:15.335 [2024-11-20 13:57:22.979441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:37:15.335 [2024-11-20 13:57:22.979470] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:37:15.335 [2024-11-20 13:57:22.979500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:15.335 [2024-11-20 13:57:22.979529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:37:15.335 [2024-11-20 13:57:22.979558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:37:15.335 [2024-11-20 13:57:22.979599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:15.335 [2024-11-20 13:57:22.979627] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:37:15.335 [2024-11-20 13:57:22.979658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:37:15.335 [2024-11-20 13:57:22.979688] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:37:15.335 [2024-11-20 13:57:22.979728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:37:15.335 [2024-11-20 13:57:22.979762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:37:15.335 [2024-11-20 13:57:22.979795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:37:15.335 [2024-11-20 13:57:22.979821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:37:15.335 [2024-11-20 13:57:22.979855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:37:15.335 [2024-11-20 13:57:22.979884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:37:15.335 [2024-11-20 13:57:22.979915] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:37:15.335 [2024-11-20 13:57:22.979947] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:37:15.335 [2024-11-20 13:57:22.979992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:15.335 [2024-11-20 13:57:22.980046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:37:15.335 [2024-11-20 13:57:22.980093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:37:15.335 [2024-11-20 13:57:22.980131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:37:15.336 [2024-11-20 13:57:22.980174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:37:15.336 [2024-11-20 13:57:22.980213] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:37:15.336 [2024-11-20 13:57:22.980269] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:37:15.336 [2024-11-20 13:57:22.980320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:37:15.336 [2024-11-20 13:57:22.980366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:37:15.336 [2024-11-20 13:57:22.980404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:37:15.336 [2024-11-20 13:57:22.980462] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:37:15.336 [2024-11-20 13:57:22.980499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:37:15.336 [2024-11-20 13:57:22.980559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:37:15.336 [2024-11-20 13:57:22.980600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:37:15.336 [2024-11-20 13:57:22.980636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:37:15.336 [2024-11-20 13:57:22.980676] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:37:15.336 [2024-11-20 13:57:22.980732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:37:15.336 [2024-11-20 13:57:22.980781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:37:15.336 [2024-11-20 13:57:22.980822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:37:15.336 [2024-11-20 13:57:22.980869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:37:15.336 [2024-11-20 13:57:22.980913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:37:15.336 [2024-11-20 13:57:22.980956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:15.336 [2024-11-20 13:57:22.980987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:37:15.336 [2024-11-20 13:57:22.981014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.740 ms 00:37:15.336 [2024-11-20 13:57:22.981054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:15.336 [2024-11-20 13:57:22.981239] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
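Annotation: the layout dump above is internally consistent — the FTL exposes 20971520 user blocks of 4 KiB, and with an "L2P address size" of 4 bytes the mapping table needs 20971520 × 4 B = 80 MiB, exactly the 80.00 MiB l2p region; --l2p_dram_limit 60 then caps how much of that table may stay resident, hence the "59 (of 60) MiB" notice later in the startup. Checking the numbers, with values taken straight from the dump:

  l2p_entries=20971520                                  # "L2P entries" in the dump
  echo $(( l2p_entries * 4 / 1024 / 1024 ))             # 4-byte entries -> 80 (MiB l2p region)
  echo $(( l2p_entries * 4096 / 1024 / 1024 / 1024 ))   # 4 KiB blocks  -> 80 (GiB user capacity)

Separately, the "/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected" message above is the usual empty-operand failure of a test like [ $var -eq 1 ] when $var is unset; written as [ "${var:-0}" -eq 1 ] it would not trip. Here the failed test simply evaluates false and the run continues down the default path, where l2p_dram_size_mb is set to 60.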
00:37:15.336 [2024-11-20 13:57:22.981296] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:37:19.522 [2024-11-20 13:57:27.192744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:19.522 [2024-11-20 13:57:27.192920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:37:19.522 [2024-11-20 13:57:27.192980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4219.614 ms 00:37:19.522 [2024-11-20 13:57:27.193009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:19.522 [2024-11-20 13:57:27.231979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:19.522 [2024-11-20 13:57:27.232122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:19.522 [2024-11-20 13:57:27.232158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.591 ms 00:37:19.522 [2024-11-20 13:57:27.232182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:19.522 [2024-11-20 13:57:27.232403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:19.522 [2024-11-20 13:57:27.232449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:37:19.522 [2024-11-20 13:57:27.232490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:37:19.522 [2024-11-20 13:57:27.232529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:19.781 [2024-11-20 13:57:27.296250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:19.781 [2024-11-20 13:57:27.296387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:19.781 [2024-11-20 13:57:27.296425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.736 ms 00:37:19.781 [2024-11-20 13:57:27.296450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:19.781 [2024-11-20 13:57:27.296544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:19.781 [2024-11-20 13:57:27.296581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:19.781 [2024-11-20 13:57:27.296621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:37:19.781 [2024-11-20 13:57:27.296652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:19.781 [2024-11-20 13:57:27.297221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:19.781 [2024-11-20 13:57:27.297283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:19.781 [2024-11-20 13:57:27.297316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:37:19.781 [2024-11-20 13:57:27.297354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:19.781 [2024-11-20 13:57:27.297530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:19.781 [2024-11-20 13:57:27.297573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:19.781 [2024-11-20 13:57:27.297603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:37:19.781 [2024-11-20 13:57:27.297639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:19.781 [2024-11-20 13:57:27.319790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:19.781 [2024-11-20 13:57:27.319884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:19.781 [2024-11-20 
13:57:27.319899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.127 ms 00:37:19.781 [2024-11-20 13:57:27.319910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:19.781 [2024-11-20 13:57:27.332730] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:37:19.781 [2024-11-20 13:57:27.349209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:19.781 [2024-11-20 13:57:27.349291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:37:19.781 [2024-11-20 13:57:27.349308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.177 ms 00:37:19.781 [2024-11-20 13:57:27.349321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:19.781 [2024-11-20 13:57:27.438101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:19.781 [2024-11-20 13:57:27.438169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:37:19.781 [2024-11-20 13:57:27.438189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.852 ms 00:37:19.781 [2024-11-20 13:57:27.438198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:19.781 [2024-11-20 13:57:27.438471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:19.781 [2024-11-20 13:57:27.438486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:37:19.781 [2024-11-20 13:57:27.438501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:37:19.781 [2024-11-20 13:57:27.438510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:19.781 [2024-11-20 13:57:27.475680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:19.781 [2024-11-20 13:57:27.475836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:37:19.781 [2024-11-20 13:57:27.475857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.136 ms 00:37:19.781 [2024-11-20 13:57:27.475867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:20.039 [2024-11-20 13:57:27.511293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:20.039 [2024-11-20 13:57:27.511400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:37:20.039 [2024-11-20 13:57:27.511419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.413 ms 00:37:20.039 [2024-11-20 13:57:27.511428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:20.039 [2024-11-20 13:57:27.512239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:20.039 [2024-11-20 13:57:27.512261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:37:20.039 [2024-11-20 13:57:27.512275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:37:20.039 [2024-11-20 13:57:27.512283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:20.039 [2024-11-20 13:57:27.614546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:20.039 [2024-11-20 13:57:27.614606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:37:20.039 [2024-11-20 13:57:27.614628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.363 ms 00:37:20.039 [2024-11-20 13:57:27.614651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:20.039 [2024-11-20 
13:57:27.652956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:20.039 [2024-11-20 13:57:27.653012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:37:20.039 [2024-11-20 13:57:27.653029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.192 ms 00:37:20.039 [2024-11-20 13:57:27.653039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:20.039 [2024-11-20 13:57:27.689718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:20.039 [2024-11-20 13:57:27.689778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:37:20.039 [2024-11-20 13:57:27.689794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.678 ms 00:37:20.039 [2024-11-20 13:57:27.689801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:20.039 [2024-11-20 13:57:27.727235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:20.039 [2024-11-20 13:57:27.727290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:37:20.039 [2024-11-20 13:57:27.727307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.424 ms 00:37:20.039 [2024-11-20 13:57:27.727315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:20.040 [2024-11-20 13:57:27.727399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:20.040 [2024-11-20 13:57:27.727410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:37:20.040 [2024-11-20 13:57:27.727428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:37:20.040 [2024-11-20 13:57:27.727437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:20.040 [2024-11-20 13:57:27.727641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:20.040 [2024-11-20 13:57:27.727655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:37:20.040 [2024-11-20 13:57:27.727668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:37:20.040 [2024-11-20 13:57:27.727678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:20.040 [2024-11-20 13:57:27.729227] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4777.796 ms, result 0 00:37:20.040 { 00:37:20.040 "name": "ftl0", 00:37:20.040 "uuid": "c2a45e9c-a48e-4efb-a35c-1f5ad93e5ae6" 00:37:20.040 } 00:37:20.298 13:57:27 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:37:20.298 13:57:27 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:37:20.298 13:57:27 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:20.298 13:57:27 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:37:20.298 13:57:27 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:20.298 13:57:27 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:20.298 13:57:27 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:20.298 13:57:27 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:37:20.557 [ 00:37:20.557 { 00:37:20.557 "name": "ftl0", 00:37:20.557 "aliases": [ 00:37:20.557 "c2a45e9c-a48e-4efb-a35c-1f5ad93e5ae6" 00:37:20.557 ], 00:37:20.557 "product_name": "FTL 
disk", 00:37:20.557 "block_size": 4096, 00:37:20.557 "num_blocks": 20971520, 00:37:20.557 "uuid": "c2a45e9c-a48e-4efb-a35c-1f5ad93e5ae6", 00:37:20.557 "assigned_rate_limits": { 00:37:20.557 "rw_ios_per_sec": 0, 00:37:20.557 "rw_mbytes_per_sec": 0, 00:37:20.557 "r_mbytes_per_sec": 0, 00:37:20.557 "w_mbytes_per_sec": 0 00:37:20.557 }, 00:37:20.557 "claimed": false, 00:37:20.557 "zoned": false, 00:37:20.557 "supported_io_types": { 00:37:20.557 "read": true, 00:37:20.557 "write": true, 00:37:20.557 "unmap": true, 00:37:20.557 "flush": true, 00:37:20.557 "reset": false, 00:37:20.557 "nvme_admin": false, 00:37:20.557 "nvme_io": false, 00:37:20.557 "nvme_io_md": false, 00:37:20.557 "write_zeroes": true, 00:37:20.557 "zcopy": false, 00:37:20.557 "get_zone_info": false, 00:37:20.557 "zone_management": false, 00:37:20.557 "zone_append": false, 00:37:20.557 "compare": false, 00:37:20.557 "compare_and_write": false, 00:37:20.557 "abort": false, 00:37:20.557 "seek_hole": false, 00:37:20.557 "seek_data": false, 00:37:20.557 "copy": false, 00:37:20.557 "nvme_iov_md": false 00:37:20.557 }, 00:37:20.557 "driver_specific": { 00:37:20.557 "ftl": { 00:37:20.557 "base_bdev": "95632865-bcde-4b67-8d37-235b4ea50faa", 00:37:20.557 "cache": "nvc0n1p0" 00:37:20.557 } 00:37:20.557 } 00:37:20.557 } 00:37:20.557 ] 00:37:20.557 13:57:28 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:37:20.557 13:57:28 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:37:20.557 13:57:28 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:37:20.816 13:57:28 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:37:20.816 13:57:28 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:37:21.075 [2024-11-20 13:57:28.600680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.075 [2024-11-20 13:57:28.600848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:37:21.075 [2024-11-20 13:57:28.600889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:37:21.075 [2024-11-20 13:57:28.600915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.075 [2024-11-20 13:57:28.601030] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:37:21.075 [2024-11-20 13:57:28.605371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.075 [2024-11-20 13:57:28.605450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:37:21.075 [2024-11-20 13:57:28.605481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.274 ms 00:37:21.075 [2024-11-20 13:57:28.605504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.075 [2024-11-20 13:57:28.606266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.075 [2024-11-20 13:57:28.606324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:37:21.075 [2024-11-20 13:57:28.606363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:37:21.075 [2024-11-20 13:57:28.606395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.075 [2024-11-20 13:57:28.608878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.075 [2024-11-20 13:57:28.608929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:37:21.075 
[2024-11-20 13:57:28.608956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.444 ms 00:37:21.075 [2024-11-20 13:57:28.608977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.075 [2024-11-20 13:57:28.613922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.075 [2024-11-20 13:57:28.613982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:37:21.075 [2024-11-20 13:57:28.613998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.899 ms 00:37:21.075 [2024-11-20 13:57:28.614006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.075 [2024-11-20 13:57:28.649632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.075 [2024-11-20 13:57:28.649670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:37:21.075 [2024-11-20 13:57:28.649684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.549 ms 00:37:21.075 [2024-11-20 13:57:28.649692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.075 [2024-11-20 13:57:28.672093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.075 [2024-11-20 13:57:28.672131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:37:21.075 [2024-11-20 13:57:28.672155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.318 ms 00:37:21.075 [2024-11-20 13:57:28.672163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.075 [2024-11-20 13:57:28.672502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.075 [2024-11-20 13:57:28.672517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:37:21.075 [2024-11-20 13:57:28.672529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:37:21.075 [2024-11-20 13:57:28.672536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.075 [2024-11-20 13:57:28.708795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.075 [2024-11-20 13:57:28.708829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:37:21.075 [2024-11-20 13:57:28.708843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.270 ms 00:37:21.075 [2024-11-20 13:57:28.708850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.075 [2024-11-20 13:57:28.743739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.075 [2024-11-20 13:57:28.743772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:37:21.075 [2024-11-20 13:57:28.743785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.885 ms 00:37:21.075 [2024-11-20 13:57:28.743792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.075 [2024-11-20 13:57:28.780880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.075 [2024-11-20 13:57:28.780917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:37:21.075 [2024-11-20 13:57:28.780932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.078 ms 00:37:21.075 [2024-11-20 13:57:28.780940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.335 [2024-11-20 13:57:28.819199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.335 [2024-11-20 13:57:28.819247] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:37:21.335 [2024-11-20 13:57:28.819261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.130 ms 00:37:21.335 [2024-11-20 13:57:28.819269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.335 [2024-11-20 13:57:28.819350] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:37:21.335 [2024-11-20 13:57:28.819368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 
[2024-11-20 13:57:28.819584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:37:21.335 [2024-11-20 13:57:28.819887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.819899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.819908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.819918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.819927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.819940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.819948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.819958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.819966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.819977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.819985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.819995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:37:21.336 [2024-11-20 13:57:28.820015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:37:21.336 [2024-11-20 13:57:28.820581] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:37:21.336 [2024-11-20 13:57:28.820592] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c2a45e9c-a48e-4efb-a35c-1f5ad93e5ae6 00:37:21.336 [2024-11-20 13:57:28.820602] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:37:21.336 [2024-11-20 13:57:28.820614] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:37:21.336 [2024-11-20 13:57:28.820623] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:37:21.336 [2024-11-20 13:57:28.820636] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:37:21.336 [2024-11-20 13:57:28.820646] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:37:21.336 [2024-11-20 13:57:28.820657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:37:21.336 [2024-11-20 13:57:28.820665] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:37:21.336 [2024-11-20 13:57:28.820675] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:37:21.336 [2024-11-20 13:57:28.820682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:37:21.336 [2024-11-20 13:57:28.820692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.336 [2024-11-20 13:57:28.820701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:37:21.336 [2024-11-20 13:57:28.820712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.366 ms 00:37:21.336 [2024-11-20 13:57:28.820735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.337 [2024-11-20 13:57:28.840769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.337 [2024-11-20 13:57:28.840896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:37:21.337 [2024-11-20 13:57:28.840914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.957 ms 00:37:21.337 [2024-11-20 13:57:28.840922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.337 [2024-11-20 13:57:28.841449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:37:21.337 [2024-11-20 13:57:28.841466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:37:21.337 [2024-11-20 13:57:28.841478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:37:21.337 [2024-11-20 13:57:28.841486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.337 [2024-11-20 13:57:28.911328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:21.337 [2024-11-20 13:57:28.911471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:37:21.337 [2024-11-20 13:57:28.911491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:21.337 [2024-11-20 13:57:28.911500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
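The band dump and statistics block above use a fixed record shape: each band prints as "Band N: valid / size wr_cnt: W state: S", and the stats pair "total writes" against "user writes" (the "WAF: inf" printed here is consistent with total writes divided by user writes when user writes are 0). A minimal bash sketch for summarizing such a dump offline, assuming the exact *NOTICE* strings shown and with "autotest.log" as a placeholder for the captured log file:

    # count bands per state and sum the valid-block column of the dump
    grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' autotest.log |
      awk '{ valid += $3; state[$NF]++ }
           END { print "valid blocks:", valid; for (s in state) print s ": " state[s] " bands" }'

    # recompute WAF from the two write counters in the stats block
    grep -oE '(total|user) writes: [0-9]+' autotest.log |
      awk '{ w[$1] = $3 } END { print "WAF:", (w["user"] ? w["total"] / w["user"] : "inf") }'

Run against this log, the first command would report all 100 bands as "free" with 0 valid blocks, matching the dump above.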
00:37:21.337 [2024-11-20 13:57:28.911627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:21.337 [2024-11-20 13:57:28.911637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:37:21.337 [2024-11-20 13:57:28.911648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:21.337 [2024-11-20 13:57:28.911655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.337 [2024-11-20 13:57:28.911832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:21.337 [2024-11-20 13:57:28.911851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:37:21.337 [2024-11-20 13:57:28.911863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:21.337 [2024-11-20 13:57:28.911872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.337 [2024-11-20 13:57:28.911933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:21.337 [2024-11-20 13:57:28.911943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:37:21.337 [2024-11-20 13:57:28.911953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:21.337 [2024-11-20 13:57:28.911961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.337 [2024-11-20 13:57:29.044021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:21.337 [2024-11-20 13:57:29.044091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:37:21.337 [2024-11-20 13:57:29.044108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:21.337 [2024-11-20 13:57:29.044117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.602 [2024-11-20 13:57:29.146511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:21.602 [2024-11-20 13:57:29.146579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:37:21.602 [2024-11-20 13:57:29.146595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:21.602 [2024-11-20 13:57:29.146604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.602 [2024-11-20 13:57:29.146785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:21.602 [2024-11-20 13:57:29.146798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:37:21.602 [2024-11-20 13:57:29.146814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:21.602 [2024-11-20 13:57:29.146822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.602 [2024-11-20 13:57:29.146938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:21.602 [2024-11-20 13:57:29.146948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:37:21.602 [2024-11-20 13:57:29.146960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:21.602 [2024-11-20 13:57:29.146968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.602 [2024-11-20 13:57:29.147115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:21.602 [2024-11-20 13:57:29.147134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:37:21.602 [2024-11-20 13:57:29.147146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:21.602 [2024-11-20 
13:57:29.147157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.602 [2024-11-20 13:57:29.147256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:21.602 [2024-11-20 13:57:29.147279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:37:21.602 [2024-11-20 13:57:29.147290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:21.602 [2024-11-20 13:57:29.147299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.603 [2024-11-20 13:57:29.147368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:21.603 [2024-11-20 13:57:29.147378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:37:21.603 [2024-11-20 13:57:29.147388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:21.603 [2024-11-20 13:57:29.147396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.603 [2024-11-20 13:57:29.147480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:37:21.603 [2024-11-20 13:57:29.147491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:37:21.603 [2024-11-20 13:57:29.147501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:37:21.603 [2024-11-20 13:57:29.147508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:37:21.603 [2024-11-20 13:57:29.147835] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 548.098 ms, result 0 00:37:21.603 true 00:37:21.603 13:57:29 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 77720 00:37:21.603 13:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 77720 ']' 00:37:21.603 13:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 77720 00:37:21.603 13:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:37:21.603 13:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:21.603 13:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77720 00:37:21.603 13:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:21.603 13:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:21.603 killing process with pid 77720 00:37:21.603 13:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77720' 00:37:21.603 13:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 77720 00:37:21.603 13:57:29 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 77720 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:29.727 13:57:36 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:37:29.727 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:37:29.727 fio-3.35 00:37:29.727 Starting 1 thread 00:37:35.011 00:37:35.011 test: (groupid=0, jobs=1): err= 0: pid=77999: Wed Nov 20 13:57:42 2024 00:37:35.011 read: IOPS=995, BW=66.1MiB/s (69.3MB/s)(255MiB/3852msec) 00:37:35.011 slat (usec): min=4, max=109, avg= 7.83, stdev= 4.14 00:37:35.011 clat (usec): min=282, max=2061, avg=445.61, stdev=72.09 00:37:35.011 lat (usec): min=288, max=2067, avg=453.44, stdev=72.88 00:37:35.011 clat percentiles (usec): 00:37:35.011 | 1.00th=[ 314], 5.00th=[ 330], 10.00th=[ 371], 20.00th=[ 392], 00:37:35.012 | 30.00th=[ 408], 40.00th=[ 437], 50.00th=[ 449], 60.00th=[ 461], 00:37:35.012 | 70.00th=[ 474], 80.00th=[ 494], 90.00th=[ 523], 95.00th=[ 537], 00:37:35.012 | 99.00th=[ 611], 99.50th=[ 701], 99.90th=[ 848], 99.95th=[ 1516], 00:37:35.012 | 99.99th=[ 2057] 00:37:35.012 write: IOPS=1002, BW=66.5MiB/s (69.8MB/s)(256MiB/3848msec); 0 zone resets 00:37:35.012 slat (usec): min=15, max=185, avg=24.06, stdev= 8.34 00:37:35.012 clat (usec): min=333, max=5298, avg=511.48, stdev=115.45 00:37:35.012 lat (usec): min=351, max=5319, avg=535.54, stdev=116.85 00:37:35.012 clat percentiles (usec): 00:37:35.012 | 1.00th=[ 388], 5.00th=[ 404], 10.00th=[ 416], 20.00th=[ 441], 00:37:35.012 | 30.00th=[ 469], 40.00th=[ 482], 50.00th=[ 498], 60.00th=[ 523], 00:37:35.012 | 70.00th=[ 537], 80.00th=[ 553], 90.00th=[ 594], 95.00th=[ 627], 00:37:35.012 | 99.00th=[ 873], 99.50th=[ 922], 99.90th=[ 1074], 99.95th=[ 1123], 00:37:35.012 | 99.99th=[ 5276] 00:37:35.012 bw ( KiB/s): min=61064, max=71944, per=100.00%, avg=68349.71, stdev=3926.65, samples=7 00:37:35.012 iops : min= 898, max= 1058, avg=1005.14, stdev=57.74, samples=7 00:37:35.012 lat (usec) : 500=66.28%, 750=32.24%, 1000=1.37% 00:37:35.012 lat (msec) : 
2=0.09%, 4=0.01%, 10=0.01% 00:37:35.012 cpu : usr=99.22%, sys=0.08%, ctx=7, majf=0, minf=1169 00:37:35.012 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:35.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:35.012 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:35.012 latency : target=0, window=0, percentile=100.00%, depth=1 00:37:35.012 00:37:35.012 Run status group 0 (all jobs): 00:37:35.012 READ: bw=66.1MiB/s (69.3MB/s), 66.1MiB/s-66.1MiB/s (69.3MB/s-69.3MB/s), io=255MiB (267MB), run=3852-3852msec 00:37:35.012 WRITE: bw=66.5MiB/s (69.8MB/s), 66.5MiB/s-66.5MiB/s (69.8MB/s-69.8MB/s), io=256MiB (269MB), run=3848-3848msec 00:37:36.916 ----------------------------------------------------- 00:37:36.916 Suppressions used: 00:37:36.916 count bytes template 00:37:36.916 1 5 /usr/src/fio/parse.c 00:37:36.916 1 8 libtcmalloc_minimal.so 00:37:36.916 1 904 libcrypto.so 00:37:36.916 ----------------------------------------------------- 00:37:36.916 00:37:36.916 13:57:44 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:37:36.916 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:36.916 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:37:36.916 13:57:44 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:37:36.916 13:57:44 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:37:36.916 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:36.916 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:37:36.917 13:57:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:37:37.175 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:37:37.175 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:37:37.175 fio-3.35 00:37:37.175 Starting 2 threads 00:38:03.715 00:38:03.715 first_half: (groupid=0, jobs=1): err= 0: pid=78113: Wed Nov 20 13:58:10 2024 00:38:03.715 read: IOPS=2705, BW=10.6MiB/s (11.1MB/s)(256MiB/24200msec) 00:38:03.715 slat (nsec): min=3705, max=56535, avg=7179.48, stdev=2343.37 00:38:03.715 clat (usec): min=803, max=338770, avg=39443.43, stdev=26398.43 00:38:03.715 lat (usec): min=808, max=338785, avg=39450.61, stdev=26398.86 00:38:03.715 clat percentiles (msec): 00:38:03.715 | 1.00th=[ 9], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 33], 00:38:03.715 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:38:03.715 | 70.00th=[ 37], 80.00th=[ 39], 90.00th=[ 40], 95.00th=[ 80], 00:38:03.715 | 99.00th=[ 178], 99.50th=[ 186], 99.90th=[ 253], 99.95th=[ 300], 00:38:03.715 | 99.99th=[ 330] 00:38:03.715 write: IOPS=2711, BW=10.6MiB/s (11.1MB/s)(256MiB/24170msec); 0 zone resets 00:38:03.715 slat (usec): min=4, max=456, avg= 8.84, stdev= 5.45 00:38:03.715 clat (usec): min=304, max=43206, avg=7825.51, stdev=7201.26 00:38:03.715 lat (usec): min=318, max=43212, avg=7834.35, stdev=7201.72 00:38:03.715 clat percentiles (usec): 00:38:03.715 | 1.00th=[ 1074], 5.00th=[ 1418], 10.00th=[ 1745], 20.00th=[ 3130], 00:38:03.715 | 30.00th=[ 4293], 40.00th=[ 5538], 50.00th=[ 6194], 60.00th=[ 6980], 00:38:03.715 | 70.00th=[ 7570], 80.00th=[ 9110], 90.00th=[15664], 95.00th=[21103], 00:38:03.715 | 99.00th=[38536], 99.50th=[40109], 99.90th=[41681], 99.95th=[41681], 00:38:03.715 | 99.99th=[42730] 00:38:03.715 bw ( KiB/s): min= 1680, max=46392, per=100.00%, avg=22647.13, stdev=13646.69, samples=23 00:38:03.715 iops : min= 420, max=11598, avg=5661.87, stdev=3411.68, samples=23 00:38:03.715 lat (usec) : 500=0.02%, 750=0.08%, 1000=0.26% 00:38:03.715 lat (msec) : 2=5.89%, 4=7.23%, 10=28.89%, 20=6.46%, 50=47.67% 00:38:03.715 lat (msec) : 100=1.46%, 250=1.99%, 500=0.05% 00:38:03.715 cpu : usr=99.29%, sys=0.12%, ctx=32, majf=0, minf=5540 00:38:03.715 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:38:03.715 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:03.715 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:03.715 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:03.715 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:03.715 second_half: (groupid=0, jobs=1): err= 0: pid=78114: Wed Nov 20 13:58:10 2024 00:38:03.715 read: IOPS=2726, BW=10.7MiB/s (11.2MB/s)(256MiB/24016msec) 00:38:03.715 slat (nsec): min=4040, max=37648, avg=7052.17, stdev=1997.54 00:38:03.715 clat (msec): min=11, max=218, avg=39.84, stdev=23.42 00:38:03.715 lat (msec): min=11, max=218, avg=39.84, stdev=23.42 00:38:03.715 clat percentiles (msec): 00:38:03.715 | 1.00th=[ 30], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 33], 00:38:03.715 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:38:03.715 | 70.00th=[ 37], 80.00th=[ 39], 90.00th=[ 43], 95.00th=[ 73], 00:38:03.715 | 99.00th=[ 171], 
99.50th=[ 178], 99.90th=[ 197], 99.95th=[ 205], 00:38:03.715 | 99.99th=[ 213] 00:38:03.715 write: IOPS=2743, BW=10.7MiB/s (11.2MB/s)(256MiB/23889msec); 0 zone resets 00:38:03.715 slat (usec): min=4, max=550, avg= 8.69, stdev= 6.70 00:38:03.715 clat (usec): min=386, max=40804, avg=7073.27, stdev=4493.24 00:38:03.715 lat (usec): min=403, max=40811, avg=7081.96, stdev=4493.97 00:38:03.715 clat percentiles (usec): 00:38:03.715 | 1.00th=[ 1254], 5.00th=[ 2089], 10.00th=[ 2737], 20.00th=[ 3851], 00:38:03.715 | 30.00th=[ 4883], 40.00th=[ 5538], 50.00th=[ 6063], 60.00th=[ 6849], 00:38:03.716 | 70.00th=[ 7242], 80.00th=[ 8586], 90.00th=[13698], 95.00th=[16188], 00:38:03.716 | 99.00th=[22152], 99.50th=[28443], 99.90th=[36439], 99.95th=[39060], 00:38:03.716 | 99.99th=[40109] 00:38:03.716 bw ( KiB/s): min= 1656, max=46008, per=100.00%, avg=24867.81, stdev=13744.06, samples=21 00:38:03.716 iops : min= 414, max=11502, avg=6216.95, stdev=3436.01, samples=21 00:38:03.716 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.15% 00:38:03.716 lat (msec) : 2=2.05%, 4=8.96%, 10=30.35%, 20=7.81%, 50=47.05% 00:38:03.716 lat (msec) : 100=1.68%, 250=1.88% 00:38:03.716 cpu : usr=99.23%, sys=0.17%, ctx=32, majf=0, minf=5573 00:38:03.716 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:38:03.716 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:03.716 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:03.716 issued rwts: total=65490,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:03.716 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:03.716 00:38:03.716 Run status group 0 (all jobs): 00:38:03.716 READ: bw=21.1MiB/s (22.2MB/s), 10.6MiB/s-10.7MiB/s (11.1MB/s-11.2MB/s), io=512MiB (536MB), run=24016-24200msec 00:38:03.716 WRITE: bw=21.2MiB/s (22.2MB/s), 10.6MiB/s-10.7MiB/s (11.1MB/s-11.2MB/s), io=512MiB (537MB), run=23889-24170msec 00:38:05.622 ----------------------------------------------------- 00:38:05.622 Suppressions used: 00:38:05.622 count bytes template 00:38:05.622 2 10 /usr/src/fio/parse.c 00:38:05.622 2 192 /usr/src/fio/iolog.c 00:38:05.622 1 8 libtcmalloc_minimal.so 00:38:05.622 1 904 libcrypto.so 00:38:05.622 ----------------------------------------------------- 00:38:05.622 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:38:05.882 13:58:13 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:38:06.142 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:38:06.142 fio-3.35 00:38:06.142 Starting 1 thread 00:38:24.242 00:38:24.242 test: (groupid=0, jobs=1): err= 0: pid=78433: Wed Nov 20 13:58:31 2024 00:38:24.242 read: IOPS=6128, BW=23.9MiB/s (25.1MB/s)(255MiB/10640msec) 00:38:24.242 slat (nsec): min=3385, max=47994, avg=9248.04, stdev=4301.24 00:38:24.242 clat (usec): min=844, max=41250, avg=20874.66, stdev=1147.90 00:38:24.242 lat (usec): min=848, max=41261, avg=20883.91, stdev=1147.78 00:38:24.242 clat percentiles (usec): 00:38:24.242 | 1.00th=[19530], 5.00th=[19792], 10.00th=[20055], 20.00th=[20317], 00:38:24.243 | 30.00th=[20579], 40.00th=[20579], 50.00th=[20841], 60.00th=[20841], 00:38:24.243 | 70.00th=[21103], 80.00th=[21365], 90.00th=[21627], 95.00th=[21627], 00:38:24.243 | 99.00th=[23987], 99.50th=[27395], 99.90th=[32375], 99.95th=[35914], 00:38:24.243 | 99.99th=[40109] 00:38:24.243 write: IOPS=10.4k, BW=40.6MiB/s (42.5MB/s)(256MiB/6309msec); 0 zone resets 00:38:24.243 slat (usec): min=4, max=827, avg=10.97, stdev= 9.28 00:38:24.243 clat (usec): min=784, max=67706, avg=12260.46, stdev=14671.10 00:38:24.243 lat (usec): min=793, max=67714, avg=12271.43, stdev=14671.09 00:38:24.243 clat percentiles (usec): 00:38:24.243 | 1.00th=[ 1254], 5.00th=[ 1516], 10.00th=[ 1713], 20.00th=[ 1942], 00:38:24.243 | 30.00th=[ 2180], 40.00th=[ 2606], 50.00th=[ 8160], 60.00th=[ 9896], 00:38:24.243 | 70.00th=[11076], 80.00th=[13435], 90.00th=[44303], 95.00th=[45876], 00:38:24.243 | 99.00th=[47973], 99.50th=[48497], 99.90th=[51119], 99.95th=[54789], 00:38:24.243 | 99.99th=[61604] 00:38:24.243 bw ( KiB/s): min=22216, max=55216, per=97.06%, avg=40329.85, stdev=8173.55, samples=13 00:38:24.243 iops : min= 5554, max=13804, avg=10082.46, stdev=2043.39, samples=13 00:38:24.243 lat (usec) : 1000=0.02% 00:38:24.243 lat (msec) : 2=11.40%, 4=9.45%, 10=9.71%, 20=15.78%, 50=53.56% 00:38:24.243 lat (msec) : 100=0.07% 00:38:24.243 cpu : usr=98.95%, sys=0.34%, ctx=29, majf=0, minf=5565 00:38:24.243 IO depths : 1=0.1%, 
2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:38:24.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:38:24.243 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:38:24.243 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:38:24.243 latency : target=0, window=0, percentile=100.00%, depth=128 00:38:24.243 00:38:24.243 Run status group 0 (all jobs): 00:38:24.243 READ: bw=23.9MiB/s (25.1MB/s), 23.9MiB/s-23.9MiB/s (25.1MB/s-25.1MB/s), io=255MiB (267MB), run=10640-10640msec 00:38:24.243 WRITE: bw=40.6MiB/s (42.5MB/s), 40.6MiB/s-40.6MiB/s (42.5MB/s-42.5MB/s), io=256MiB (268MB), run=6309-6309msec 00:38:26.784 ----------------------------------------------------- 00:38:26.784 Suppressions used: 00:38:26.784 count bytes template 00:38:26.784 1 5 /usr/src/fio/parse.c 00:38:26.784 2 192 /usr/src/fio/iolog.c 00:38:26.784 1 8 libtcmalloc_minimal.so 00:38:26.784 1 904 libcrypto.so 00:38:26.784 ----------------------------------------------------- 00:38:26.784 00:38:26.784 13:58:34 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:38:26.784 13:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:26.784 13:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:38:26.784 13:58:34 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:38:26.784 13:58:34 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:38:26.784 13:58:34 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:38:26.784 Remove shared memory files 00:38:26.784 13:58:34 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:38:26.784 13:58:34 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:38:26.784 13:58:34 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58077 /dev/shm/spdk_tgt_trace.pid76610 00:38:26.784 13:58:34 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:38:26.784 13:58:34 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:38:26.784 ************************************ 00:38:26.784 END TEST ftl_fio_basic 00:38:26.784 ************************************ 00:38:26.784 00:38:26.784 real 1m15.930s 00:38:26.784 user 2m45.885s 00:38:26.785 sys 0m3.957s 00:38:26.785 13:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:26.785 13:58:34 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:38:26.785 13:58:34 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:38:26.785 13:58:34 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:26.785 13:58:34 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:26.785 13:58:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:38:26.785 ************************************ 00:38:26.785 START TEST ftl_bdevperf 00:38:26.785 ************************************ 00:38:26.785 13:58:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:38:26.785 * Looking for test storage... 
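All three fio stages above follow the same preload pattern visible in the xtrace: ldd the SPDK fio plugin, let the sanitizers loop ('libasan' 'libclang_rt.asan') resolve the ASan runtime it links against (here /usr/lib64/libasan.so.8), then preload both so fio can drive ioengine=spdk_bdev. Condensed into a standalone sketch, with paths taken verbatim from this run and the randw-verify job file as the example:

    # resolve the ASan runtime the fio plugin was linked against, then preload
    # it together with the plugin so fio can use ioengine=spdk_bdev
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio

The same invocation pattern covers the randw-verify-j2 and randw-verify-depth128 job files above.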
00:38:26.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:38:26.785 13:58:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:26.785 13:58:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:38:26.785 13:58:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:27.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.045 --rc genhtml_branch_coverage=1 00:38:27.045 --rc genhtml_function_coverage=1 00:38:27.045 --rc genhtml_legend=1 00:38:27.045 --rc geninfo_all_blocks=1 00:38:27.045 --rc geninfo_unexecuted_blocks=1 00:38:27.045 00:38:27.045 ' 00:38:27.045 13:58:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:27.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.045 --rc genhtml_branch_coverage=1 00:38:27.045 
--rc genhtml_function_coverage=1 00:38:27.045 --rc genhtml_legend=1 00:38:27.045 --rc geninfo_all_blocks=1 00:38:27.045 --rc geninfo_unexecuted_blocks=1 00:38:27.045 00:38:27.045 ' 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:27.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.046 --rc genhtml_branch_coverage=1 00:38:27.046 --rc genhtml_function_coverage=1 00:38:27.046 --rc genhtml_legend=1 00:38:27.046 --rc geninfo_all_blocks=1 00:38:27.046 --rc geninfo_unexecuted_blocks=1 00:38:27.046 00:38:27.046 ' 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:27.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.046 --rc genhtml_branch_coverage=1 00:38:27.046 --rc genhtml_function_coverage=1 00:38:27.046 --rc genhtml_legend=1 00:38:27.046 --rc geninfo_all_blocks=1 00:38:27.046 --rc geninfo_unexecuted_blocks=1 00:38:27.046 00:38:27.046 ' 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=78710 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 78710 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 78710 ']' 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:27.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:27.046 13:58:34 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:38:27.046 [2024-11-20 13:58:34.681855] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
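The bdevperf run starting here launches the app idle (-z) so the harness can build the FTL bdev over RPC before any I/O, and waitforlisten blocks until the new process (pid 78710 above) answers on its RPC socket. A rough by-hand equivalent, where the polling loop is an assumption standing in for the waitforlisten helper and rpc_get_methods is used only as a cheap probe call:

    # start bdevperf idle with the same flags as the run above
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    # poll the default RPC socket until the app responds (waitforlisten stand-in)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
    done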
00:38:27.046 [2024-11-20 13:58:34.682067] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78710 ] 00:38:27.306 [2024-11-20 13:58:34.856005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:27.306 [2024-11-20 13:58:34.966180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:27.876 13:58:35 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:27.876 13:58:35 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:38:27.876 13:58:35 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:38:27.876 13:58:35 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:38:27.876 13:58:35 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:38:27.876 13:58:35 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:38:27.876 13:58:35 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:38:27.876 13:58:35 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:38:28.137 13:58:35 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:38:28.137 13:58:35 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:38:28.137 13:58:35 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:38:28.137 13:58:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:38:28.137 13:58:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:38:28.137 13:58:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:38:28.137 13:58:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:38:28.137 13:58:35 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:38:28.397 13:58:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:38:28.397 { 00:38:28.397 "name": "nvme0n1", 00:38:28.397 "aliases": [ 00:38:28.397 "d4fa43c8-c469-42e2-a01a-2d527caed09b" 00:38:28.397 ], 00:38:28.397 "product_name": "NVMe disk", 00:38:28.397 "block_size": 4096, 00:38:28.397 "num_blocks": 1310720, 00:38:28.397 "uuid": "d4fa43c8-c469-42e2-a01a-2d527caed09b", 00:38:28.397 "numa_id": -1, 00:38:28.397 "assigned_rate_limits": { 00:38:28.397 "rw_ios_per_sec": 0, 00:38:28.397 "rw_mbytes_per_sec": 0, 00:38:28.397 "r_mbytes_per_sec": 0, 00:38:28.397 "w_mbytes_per_sec": 0 00:38:28.397 }, 00:38:28.397 "claimed": true, 00:38:28.397 "claim_type": "read_many_write_one", 00:38:28.397 "zoned": false, 00:38:28.397 "supported_io_types": { 00:38:28.397 "read": true, 00:38:28.397 "write": true, 00:38:28.397 "unmap": true, 00:38:28.397 "flush": true, 00:38:28.397 "reset": true, 00:38:28.397 "nvme_admin": true, 00:38:28.397 "nvme_io": true, 00:38:28.397 "nvme_io_md": false, 00:38:28.397 "write_zeroes": true, 00:38:28.397 "zcopy": false, 00:38:28.397 "get_zone_info": false, 00:38:28.397 "zone_management": false, 00:38:28.397 "zone_append": false, 00:38:28.397 "compare": true, 00:38:28.398 "compare_and_write": false, 00:38:28.398 "abort": true, 00:38:28.398 "seek_hole": false, 00:38:28.398 "seek_data": false, 00:38:28.398 "copy": true, 00:38:28.398 "nvme_iov_md": false 00:38:28.398 }, 00:38:28.398 "driver_specific": { 00:38:28.398 
"nvme": [ 00:38:28.398 { 00:38:28.398 "pci_address": "0000:00:11.0", 00:38:28.398 "trid": { 00:38:28.398 "trtype": "PCIe", 00:38:28.398 "traddr": "0000:00:11.0" 00:38:28.398 }, 00:38:28.398 "ctrlr_data": { 00:38:28.398 "cntlid": 0, 00:38:28.398 "vendor_id": "0x1b36", 00:38:28.398 "model_number": "QEMU NVMe Ctrl", 00:38:28.398 "serial_number": "12341", 00:38:28.398 "firmware_revision": "8.0.0", 00:38:28.398 "subnqn": "nqn.2019-08.org.qemu:12341", 00:38:28.398 "oacs": { 00:38:28.398 "security": 0, 00:38:28.398 "format": 1, 00:38:28.398 "firmware": 0, 00:38:28.398 "ns_manage": 1 00:38:28.398 }, 00:38:28.398 "multi_ctrlr": false, 00:38:28.398 "ana_reporting": false 00:38:28.398 }, 00:38:28.398 "vs": { 00:38:28.398 "nvme_version": "1.4" 00:38:28.398 }, 00:38:28.398 "ns_data": { 00:38:28.398 "id": 1, 00:38:28.398 "can_share": false 00:38:28.398 } 00:38:28.398 } 00:38:28.398 ], 00:38:28.398 "mp_policy": "active_passive" 00:38:28.398 } 00:38:28.398 } 00:38:28.398 ]' 00:38:28.398 13:58:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:38:28.398 13:58:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:38:28.398 13:58:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:38:28.398 13:58:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:38:28.398 13:58:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:38:28.398 13:58:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:38:28.398 13:58:36 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:38:28.398 13:58:36 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:38:28.398 13:58:36 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:38:28.398 13:58:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:38:28.398 13:58:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:38:28.658 13:58:36 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=e3d7cb4a-423e-460f-9629-586fb4a8c35d 00:38:28.658 13:58:36 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:38:28.658 13:58:36 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e3d7cb4a-423e-460f-9629-586fb4a8c35d 00:38:28.918 13:58:36 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:38:29.178 13:58:36 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=f097a777-60ec-420a-97ba-2961d15d31f9 00:38:29.178 13:58:36 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f097a777-60ec-420a-97ba-2961d15d31f9 00:38:29.438 13:58:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=cfec003c-0dcf-44ec-98ae-90eee44dac0c 00:38:29.438 13:58:36 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cfec003c-0dcf-44ec-98ae-90eee44dac0c 00:38:29.438 13:58:36 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:38:29.438 13:58:36 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:38:29.438 13:58:36 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=cfec003c-0dcf-44ec-98ae-90eee44dac0c 00:38:29.438 13:58:36 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:38:29.438 13:58:36 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size cfec003c-0dcf-44ec-98ae-90eee44dac0c 00:38:29.438 13:58:36 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=cfec003c-0dcf-44ec-98ae-90eee44dac0c 00:38:29.438 13:58:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:38:29.438 13:58:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:38:29.438 13:58:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:38:29.438 13:58:36 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cfec003c-0dcf-44ec-98ae-90eee44dac0c 00:38:29.438 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:38:29.438 { 00:38:29.438 "name": "cfec003c-0dcf-44ec-98ae-90eee44dac0c", 00:38:29.438 "aliases": [ 00:38:29.438 "lvs/nvme0n1p0" 00:38:29.438 ], 00:38:29.438 "product_name": "Logical Volume", 00:38:29.438 "block_size": 4096, 00:38:29.438 "num_blocks": 26476544, 00:38:29.438 "uuid": "cfec003c-0dcf-44ec-98ae-90eee44dac0c", 00:38:29.438 "assigned_rate_limits": { 00:38:29.438 "rw_ios_per_sec": 0, 00:38:29.438 "rw_mbytes_per_sec": 0, 00:38:29.438 "r_mbytes_per_sec": 0, 00:38:29.438 "w_mbytes_per_sec": 0 00:38:29.438 }, 00:38:29.438 "claimed": false, 00:38:29.438 "zoned": false, 00:38:29.438 "supported_io_types": { 00:38:29.438 "read": true, 00:38:29.438 "write": true, 00:38:29.438 "unmap": true, 00:38:29.438 "flush": false, 00:38:29.438 "reset": true, 00:38:29.438 "nvme_admin": false, 00:38:29.438 "nvme_io": false, 00:38:29.438 "nvme_io_md": false, 00:38:29.438 "write_zeroes": true, 00:38:29.438 "zcopy": false, 00:38:29.438 "get_zone_info": false, 00:38:29.438 "zone_management": false, 00:38:29.438 "zone_append": false, 00:38:29.438 "compare": false, 00:38:29.438 "compare_and_write": false, 00:38:29.438 "abort": false, 00:38:29.438 "seek_hole": true, 00:38:29.438 "seek_data": true, 00:38:29.438 "copy": false, 00:38:29.438 "nvme_iov_md": false 00:38:29.438 }, 00:38:29.438 "driver_specific": { 00:38:29.438 "lvol": { 00:38:29.438 "lvol_store_uuid": "f097a777-60ec-420a-97ba-2961d15d31f9", 00:38:29.438 "base_bdev": "nvme0n1", 00:38:29.438 "thin_provision": true, 00:38:29.438 "num_allocated_clusters": 0, 00:38:29.438 "snapshot": false, 00:38:29.438 "clone": false, 00:38:29.438 "esnap_clone": false 00:38:29.438 } 00:38:29.438 } 00:38:29.438 } 00:38:29.438 ]' 00:38:29.438 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:38:29.698 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:38:29.698 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:38:29.698 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:38:29.698 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:38:29.698 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:38:29.698 13:58:37 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:38:29.698 13:58:37 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:38:29.698 13:58:37 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:38:29.958 13:58:37 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:38:29.958 13:58:37 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:38:29.958 13:58:37 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size cfec003c-0dcf-44ec-98ae-90eee44dac0c 00:38:29.958 13:58:37 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=cfec003c-0dcf-44ec-98ae-90eee44dac0c 00:38:29.958 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:38:29.958 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:38:29.958 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:38:29.958 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cfec003c-0dcf-44ec-98ae-90eee44dac0c 00:38:30.219 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:38:30.219 { 00:38:30.219 "name": "cfec003c-0dcf-44ec-98ae-90eee44dac0c", 00:38:30.219 "aliases": [ 00:38:30.219 "lvs/nvme0n1p0" 00:38:30.219 ], 00:38:30.219 "product_name": "Logical Volume", 00:38:30.219 "block_size": 4096, 00:38:30.219 "num_blocks": 26476544, 00:38:30.219 "uuid": "cfec003c-0dcf-44ec-98ae-90eee44dac0c", 00:38:30.219 "assigned_rate_limits": { 00:38:30.219 "rw_ios_per_sec": 0, 00:38:30.219 "rw_mbytes_per_sec": 0, 00:38:30.219 "r_mbytes_per_sec": 0, 00:38:30.219 "w_mbytes_per_sec": 0 00:38:30.219 }, 00:38:30.219 "claimed": false, 00:38:30.219 "zoned": false, 00:38:30.219 "supported_io_types": { 00:38:30.219 "read": true, 00:38:30.219 "write": true, 00:38:30.219 "unmap": true, 00:38:30.219 "flush": false, 00:38:30.219 "reset": true, 00:38:30.219 "nvme_admin": false, 00:38:30.219 "nvme_io": false, 00:38:30.219 "nvme_io_md": false, 00:38:30.219 "write_zeroes": true, 00:38:30.219 "zcopy": false, 00:38:30.219 "get_zone_info": false, 00:38:30.219 "zone_management": false, 00:38:30.219 "zone_append": false, 00:38:30.219 "compare": false, 00:38:30.219 "compare_and_write": false, 00:38:30.219 "abort": false, 00:38:30.219 "seek_hole": true, 00:38:30.219 "seek_data": true, 00:38:30.219 "copy": false, 00:38:30.219 "nvme_iov_md": false 00:38:30.219 }, 00:38:30.219 "driver_specific": { 00:38:30.219 "lvol": { 00:38:30.219 "lvol_store_uuid": "f097a777-60ec-420a-97ba-2961d15d31f9", 00:38:30.219 "base_bdev": "nvme0n1", 00:38:30.219 "thin_provision": true, 00:38:30.219 "num_allocated_clusters": 0, 00:38:30.219 "snapshot": false, 00:38:30.219 "clone": false, 00:38:30.219 "esnap_clone": false 00:38:30.219 } 00:38:30.219 } 00:38:30.219 } 00:38:30.219 ]' 00:38:30.219 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:38:30.219 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:38:30.219 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:38:30.219 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:38:30.219 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:38:30.219 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:38:30.219 13:58:37 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:38:30.219 13:58:37 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:38:30.480 13:58:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:38:30.480 13:58:37 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size cfec003c-0dcf-44ec-98ae-90eee44dac0c 00:38:30.480 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=cfec003c-0dcf-44ec-98ae-90eee44dac0c 00:38:30.480 13:58:37 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:38:30.480 13:58:38 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:38:30.480 13:58:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:38:30.480 13:58:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cfec003c-0dcf-44ec-98ae-90eee44dac0c 00:38:30.480 13:58:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:38:30.480 { 00:38:30.480 "name": "cfec003c-0dcf-44ec-98ae-90eee44dac0c", 00:38:30.480 "aliases": [ 00:38:30.480 "lvs/nvme0n1p0" 00:38:30.480 ], 00:38:30.480 "product_name": "Logical Volume", 00:38:30.480 "block_size": 4096, 00:38:30.480 "num_blocks": 26476544, 00:38:30.480 "uuid": "cfec003c-0dcf-44ec-98ae-90eee44dac0c", 00:38:30.480 "assigned_rate_limits": { 00:38:30.480 "rw_ios_per_sec": 0, 00:38:30.480 "rw_mbytes_per_sec": 0, 00:38:30.480 "r_mbytes_per_sec": 0, 00:38:30.480 "w_mbytes_per_sec": 0 00:38:30.480 }, 00:38:30.480 "claimed": false, 00:38:30.480 "zoned": false, 00:38:30.480 "supported_io_types": { 00:38:30.480 "read": true, 00:38:30.480 "write": true, 00:38:30.480 "unmap": true, 00:38:30.480 "flush": false, 00:38:30.480 "reset": true, 00:38:30.480 "nvme_admin": false, 00:38:30.480 "nvme_io": false, 00:38:30.480 "nvme_io_md": false, 00:38:30.480 "write_zeroes": true, 00:38:30.480 "zcopy": false, 00:38:30.480 "get_zone_info": false, 00:38:30.480 "zone_management": false, 00:38:30.480 "zone_append": false, 00:38:30.480 "compare": false, 00:38:30.480 "compare_and_write": false, 00:38:30.480 "abort": false, 00:38:30.480 "seek_hole": true, 00:38:30.480 "seek_data": true, 00:38:30.480 "copy": false, 00:38:30.480 "nvme_iov_md": false 00:38:30.480 }, 00:38:30.480 "driver_specific": { 00:38:30.480 "lvol": { 00:38:30.480 "lvol_store_uuid": "f097a777-60ec-420a-97ba-2961d15d31f9", 00:38:30.480 "base_bdev": "nvme0n1", 00:38:30.480 "thin_provision": true, 00:38:30.480 "num_allocated_clusters": 0, 00:38:30.480 "snapshot": false, 00:38:30.480 "clone": false, 00:38:30.480 "esnap_clone": false 00:38:30.480 } 00:38:30.480 } 00:38:30.480 } 00:38:30.480 ]' 00:38:30.480 13:58:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:38:30.742 13:58:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:38:30.742 13:58:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:38:30.742 13:58:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:38:30.742 13:58:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:38:30.742 13:58:38 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:38:30.742 13:58:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:38:30.742 13:58:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cfec003c-0dcf-44ec-98ae-90eee44dac0c -c nvc0n1p0 --l2p_dram_limit 20 00:38:30.743 [2024-11-20 13:58:38.434863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.743 [2024-11-20 13:58:38.434912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:38:30.743 [2024-11-20 13:58:38.434943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:38:30.743 [2024-11-20 13:58:38.434954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.743 [2024-11-20 13:58:38.435024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.743 [2024-11-20 13:58:38.435039] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:38:30.743 [2024-11-20 13:58:38.435048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:38:30.743 [2024-11-20 13:58:38.435058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.743 [2024-11-20 13:58:38.435077] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:38:30.743 [2024-11-20 13:58:38.436155] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:38:30.743 [2024-11-20 13:58:38.436183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.743 [2024-11-20 13:58:38.436194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:38:30.743 [2024-11-20 13:58:38.436203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.115 ms 00:38:30.743 [2024-11-20 13:58:38.436214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.743 [2024-11-20 13:58:38.436283] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 523ac80f-4f75-4295-9408-9932a7ca89a8 00:38:30.743 [2024-11-20 13:58:38.437708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.743 [2024-11-20 13:58:38.437750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:38:30.743 [2024-11-20 13:58:38.437763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:38:30.743 [2024-11-20 13:58:38.437773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.743 [2024-11-20 13:58:38.445250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.743 [2024-11-20 13:58:38.445283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:38:30.743 [2024-11-20 13:58:38.445296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.448 ms 00:38:30.743 [2024-11-20 13:58:38.445304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.743 [2024-11-20 13:58:38.445412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.743 [2024-11-20 13:58:38.445426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:38:30.743 [2024-11-20 13:58:38.445440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:38:30.743 [2024-11-20 13:58:38.445448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.743 [2024-11-20 13:58:38.445521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.743 [2024-11-20 13:58:38.445530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:38:30.743 [2024-11-20 13:58:38.445541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:38:30.743 [2024-11-20 13:58:38.445548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.743 [2024-11-20 13:58:38.445571] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:30.743 [2024-11-20 13:58:38.450516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.743 [2024-11-20 13:58:38.450548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:38:30.743 [2024-11-20 13:58:38.450557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.965 ms 00:38:30.743 [2024-11-20 13:58:38.450587] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.743 [2024-11-20 13:58:38.450615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.743 [2024-11-20 13:58:38.450626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:38:30.743 [2024-11-20 13:58:38.450634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:38:30.743 [2024-11-20 13:58:38.450643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.743 [2024-11-20 13:58:38.450670] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:38:30.743 [2024-11-20 13:58:38.450814] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:38:30.743 [2024-11-20 13:58:38.450844] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:38:30.743 [2024-11-20 13:58:38.450858] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:38:30.743 [2024-11-20 13:58:38.450868] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:38:30.743 [2024-11-20 13:58:38.450879] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:38:30.743 [2024-11-20 13:58:38.450887] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:38:30.743 [2024-11-20 13:58:38.450897] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:38:30.743 [2024-11-20 13:58:38.450904] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:38:30.743 [2024-11-20 13:58:38.450914] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:38:30.743 [2024-11-20 13:58:38.450922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.743 [2024-11-20 13:58:38.450936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:38:30.743 [2024-11-20 13:58:38.450944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:38:30.743 [2024-11-20 13:58:38.450954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.743 [2024-11-20 13:58:38.451036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.743 [2024-11-20 13:58:38.451054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:38:30.743 [2024-11-20 13:58:38.451062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:38:30.743 [2024-11-20 13:58:38.451072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.743 [2024-11-20 13:58:38.451147] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:38:30.743 [2024-11-20 13:58:38.451159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:38:30.743 [2024-11-20 13:58:38.451169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:30.743 [2024-11-20 13:58:38.451178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:30.743 [2024-11-20 13:58:38.451187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:38:30.743 [2024-11-20 13:58:38.451196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:38:30.743 [2024-11-20 13:58:38.451202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:38:30.743 
[2024-11-20 13:58:38.451211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:38:30.743 [2024-11-20 13:58:38.451218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:38:30.743 [2024-11-20 13:58:38.451227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:30.743 [2024-11-20 13:58:38.451233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:38:30.743 [2024-11-20 13:58:38.451241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:38:30.743 [2024-11-20 13:58:38.451248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:38:30.743 [2024-11-20 13:58:38.451272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:38:30.743 [2024-11-20 13:58:38.451279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:38:30.743 [2024-11-20 13:58:38.451291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:30.743 [2024-11-20 13:58:38.451298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:38:30.743 [2024-11-20 13:58:38.451307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:38:30.743 [2024-11-20 13:58:38.451313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:30.743 [2024-11-20 13:58:38.451322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:38:30.743 [2024-11-20 13:58:38.451328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:38:30.743 [2024-11-20 13:58:38.451336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:30.743 [2024-11-20 13:58:38.451343] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:38:30.743 [2024-11-20 13:58:38.451351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:38:30.743 [2024-11-20 13:58:38.451357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:30.743 [2024-11-20 13:58:38.451365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:38:30.744 [2024-11-20 13:58:38.451372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:38:30.744 [2024-11-20 13:58:38.451380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:30.744 [2024-11-20 13:58:38.451386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:38:30.744 [2024-11-20 13:58:38.451394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:38:30.744 [2024-11-20 13:58:38.451401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:38:30.744 [2024-11-20 13:58:38.451410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:38:30.744 [2024-11-20 13:58:38.451416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:38:30.744 [2024-11-20 13:58:38.451424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:30.744 [2024-11-20 13:58:38.451431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:38:30.744 [2024-11-20 13:58:38.451439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:38:30.744 [2024-11-20 13:58:38.451445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:38:30.744 [2024-11-20 13:58:38.451454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:38:30.744 [2024-11-20 13:58:38.451460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:38:30.744 [2024-11-20 13:58:38.451469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:30.744 [2024-11-20 13:58:38.451475] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:38:30.744 [2024-11-20 13:58:38.451483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:38:30.744 [2024-11-20 13:58:38.451489] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:30.744 [2024-11-20 13:58:38.451498] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:38:30.744 [2024-11-20 13:58:38.451507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:38:30.744 [2024-11-20 13:58:38.451516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:38:30.744 [2024-11-20 13:58:38.451523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:38:30.744 [2024-11-20 13:58:38.451535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:38:30.744 [2024-11-20 13:58:38.451541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:38:30.744 [2024-11-20 13:58:38.451550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:38:30.744 [2024-11-20 13:58:38.451557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:38:30.744 [2024-11-20 13:58:38.451566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:38:30.744 [2024-11-20 13:58:38.451573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:38:30.744 [2024-11-20 13:58:38.451595] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:38:30.744 [2024-11-20 13:58:38.451605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:30.744 [2024-11-20 13:58:38.451615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:38:30.744 [2024-11-20 13:58:38.451622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:38:30.744 [2024-11-20 13:58:38.451631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:38:30.744 [2024-11-20 13:58:38.451638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:38:30.744 [2024-11-20 13:58:38.451646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:38:30.744 [2024-11-20 13:58:38.451653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:38:30.744 [2024-11-20 13:58:38.451663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:38:30.744 [2024-11-20 13:58:38.451670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:38:30.744 [2024-11-20 13:58:38.451680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:38:30.744 [2024-11-20 13:58:38.451688] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:38:30.744 [2024-11-20 13:58:38.451698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:38:30.744 [2024-11-20 13:58:38.451705] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:38:30.744 [2024-11-20 13:58:38.451723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:38:30.744 [2024-11-20 13:58:38.451731] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:38:30.744 [2024-11-20 13:58:38.451740] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:38:30.744 [2024-11-20 13:58:38.451748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:38:30.744 [2024-11-20 13:58:38.451757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:38:30.744 [2024-11-20 13:58:38.451764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:38:30.744 [2024-11-20 13:58:38.451773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:38:30.744 [2024-11-20 13:58:38.451781] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:38:30.744 [2024-11-20 13:58:38.451791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:30.744 [2024-11-20 13:58:38.451802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:38:30.744 [2024-11-20 13:58:38.451813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.694 ms 00:38:30.744 [2024-11-20 13:58:38.451821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:30.744 [2024-11-20 13:58:38.451863] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
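For reference, the ftl0 instance being started here was assembled by the shell trace above with the following RPC sequence (condensed; the PCIe addresses, sizes and UUIDs are the values captured in this particular run and will differ on other hosts):

  # base device: thin-provisioned 103424 MiB lvol on the first NVMe controller
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
  scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
  scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f097a777-60ec-420a-97ba-2961d15d31f9
  # NV cache: 5171 MiB split carved from the second controller
  scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
  scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
  # FTL bdev on top of both, L2P table capped at 20 MiB of DRAM
  scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cfec003c-0dcf-44ec-98ae-90eee44dac0c -c nvc0n1p0 --l2p_dram_limit 20

Any pre-existing lvstore is deleted first, as the clear_lvols step earlier in the trace shows. The extended -t 240 RPC timeout exists for the NV cache scrub announced in the preceding line: it accounts for roughly 3.81 s of the ~4.32 s total 'FTL startup' duration reported further below.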
00:38:30.744 [2024-11-20 13:58:38.451874] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:38:34.943 [2024-11-20 13:58:42.254346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.943 [2024-11-20 13:58:42.254414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:38:34.943 [2024-11-20 13:58:42.254434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3809.810 ms 00:38:34.943 [2024-11-20 13:58:42.254460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.943 [2024-11-20 13:58:42.290348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.943 [2024-11-20 13:58:42.290405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:34.943 [2024-11-20 13:58:42.290420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.609 ms 00:38:34.943 [2024-11-20 13:58:42.290444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.943 [2024-11-20 13:58:42.290615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.943 [2024-11-20 13:58:42.290627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:34.943 [2024-11-20 13:58:42.290640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:38:34.943 [2024-11-20 13:58:42.290647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.943 [2024-11-20 13:58:42.347041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.943 [2024-11-20 13:58:42.347092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:34.943 [2024-11-20 13:58:42.347108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.467 ms 00:38:34.943 [2024-11-20 13:58:42.347132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.943 [2024-11-20 13:58:42.347180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.943 [2024-11-20 13:58:42.347191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:34.943 [2024-11-20 13:58:42.347201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:38:34.943 [2024-11-20 13:58:42.347208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.943 [2024-11-20 13:58:42.347688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.943 [2024-11-20 13:58:42.347700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:34.943 [2024-11-20 13:58:42.347710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:38:34.943 [2024-11-20 13:58:42.347731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.943 [2024-11-20 13:58:42.347835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.943 [2024-11-20 13:58:42.347873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:34.943 [2024-11-20 13:58:42.347887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:38:34.943 [2024-11-20 13:58:42.347894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.943 [2024-11-20 13:58:42.364545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.943 [2024-11-20 13:58:42.364586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:38:34.943 [2024-11-20 
13:58:42.364602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.661 ms 00:38:34.944 [2024-11-20 13:58:42.364626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.944 [2024-11-20 13:58:42.376248] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:38:34.944 [2024-11-20 13:58:42.381946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.944 [2024-11-20 13:58:42.381985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:34.944 [2024-11-20 13:58:42.381997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.229 ms 00:38:34.944 [2024-11-20 13:58:42.382023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.944 [2024-11-20 13:58:42.477017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.944 [2024-11-20 13:58:42.477086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:38:34.944 [2024-11-20 13:58:42.477117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.135 ms 00:38:34.944 [2024-11-20 13:58:42.477127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.944 [2024-11-20 13:58:42.477312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.944 [2024-11-20 13:58:42.477328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:34.944 [2024-11-20 13:58:42.477338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:38:34.944 [2024-11-20 13:58:42.477347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.944 [2024-11-20 13:58:42.511208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.944 [2024-11-20 13:58:42.511250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:38:34.944 [2024-11-20 13:58:42.511261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.857 ms 00:38:34.944 [2024-11-20 13:58:42.511287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.944 [2024-11-20 13:58:42.544989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.944 [2024-11-20 13:58:42.545031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:38:34.944 [2024-11-20 13:58:42.545042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.731 ms 00:38:34.944 [2024-11-20 13:58:42.545051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.944 [2024-11-20 13:58:42.545784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.944 [2024-11-20 13:58:42.545806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:34.944 [2024-11-20 13:58:42.545815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.702 ms 00:38:34.944 [2024-11-20 13:58:42.545825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:34.944 [2024-11-20 13:58:42.643735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:34.944 [2024-11-20 13:58:42.643873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:38:34.944 [2024-11-20 13:58:42.643906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.054 ms 00:38:34.944 [2024-11-20 13:58:42.643917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:35.204 [2024-11-20 
13:58:42.680838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:35.204 [2024-11-20 13:58:42.680894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:38:35.204 [2024-11-20 13:58:42.680911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.917 ms 00:38:35.204 [2024-11-20 13:58:42.680921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:35.204 [2024-11-20 13:58:42.714725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:35.204 [2024-11-20 13:58:42.714771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:38:35.204 [2024-11-20 13:58:42.714800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.832 ms 00:38:35.204 [2024-11-20 13:58:42.714810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:35.204 [2024-11-20 13:58:42.749646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:35.204 [2024-11-20 13:58:42.749747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:35.204 [2024-11-20 13:58:42.749764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.867 ms 00:38:35.204 [2024-11-20 13:58:42.749775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:35.204 [2024-11-20 13:58:42.749815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:35.204 [2024-11-20 13:58:42.749831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:35.204 [2024-11-20 13:58:42.749839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:38:35.204 [2024-11-20 13:58:42.749848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:35.204 [2024-11-20 13:58:42.749941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:35.204 [2024-11-20 13:58:42.749953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:35.204 [2024-11-20 13:58:42.749961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:38:35.204 [2024-11-20 13:58:42.749970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:35.204 [2024-11-20 13:58:42.751026] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4323.993 ms, result 0 00:38:35.204 { 00:38:35.204 "name": "ftl0", 00:38:35.204 "uuid": "523ac80f-4f75-4295-9408-9932a7ca89a8" 00:38:35.204 } 00:38:35.204 13:58:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:38:35.204 13:58:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:38:35.204 13:58:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:38:35.464 13:58:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:38:35.465 [2024-11-20 13:58:43.138811] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:38:35.465 I/O size of 69632 is greater than zero copy threshold (65536). 00:38:35.465 Zero copy mechanism will not be used. 00:38:35.465 Running I/O for 4 seconds... 
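Two notes on the run whose output follows: bdevperf.py perform_tests asks the already-running bdevperf application, over its RPC socket, to execute a test with the given queue depth (-q), workload (-w), runtime in seconds (-t) and I/O size in bytes (-o). The 69632-byte (68 KiB) I/O size exceeds the 65536-byte zero-copy threshold, hence the notice above that the zero copy mechanism is skipped. The MiB/s column in the result table is simply IOPS times I/O size; as a quick cross-check of the totals below:

  1721.30 IOPS x 69632 B / 2^20 ~= 114.31 MiB/s

which matches the reported throughput.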
00:38:37.789 1619.00 IOPS, 107.51 MiB/s [2024-11-20T13:58:46.447Z] 1655.00 IOPS, 109.90 MiB/s [2024-11-20T13:58:47.386Z] 1697.33 IOPS, 112.71 MiB/s [2024-11-20T13:58:47.386Z] 1721.75 IOPS, 114.33 MiB/s 00:38:39.667 Latency(us) 00:38:39.667 [2024-11-20T13:58:47.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:39.667 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:38:39.667 ftl0 : 4.00 1721.30 114.31 0.00 0.00 609.37 221.79 2246.54 00:38:39.667 [2024-11-20T13:58:47.386Z] =================================================================================================================== 00:38:39.667 [2024-11-20T13:58:47.386Z] Total : 1721.30 114.31 0.00 0.00 609.37 221.79 2246.54 00:38:39.667 [2024-11-20 13:58:47.141518] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:38:39.667 { 00:38:39.667 "results": [ 00:38:39.667 { 00:38:39.667 "job": "ftl0", 00:38:39.667 "core_mask": "0x1", 00:38:39.667 "workload": "randwrite", 00:38:39.667 "status": "finished", 00:38:39.667 "queue_depth": 1, 00:38:39.667 "io_size": 69632, 00:38:39.667 "runtime": 4.001622, 00:38:39.667 "iops": 1721.3020120341203, 00:38:39.667 "mibps": 114.3052117366408, 00:38:39.667 "io_failed": 0, 00:38:39.667 "io_timeout": 0, 00:38:39.667 "avg_latency_us": 609.3702762604669, 00:38:39.667 "min_latency_us": 221.79213973799128, 00:38:39.667 "max_latency_us": 2246.5397379912665 00:38:39.667 } 00:38:39.667 ], 00:38:39.667 "core_count": 1 00:38:39.667 } 00:38:39.667 13:58:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:38:39.667 [2024-11-20 13:58:47.262237] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:38:39.667 Running I/O for 4 seconds... 
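The second pass repeats the random-write workload at queue depth 128 with 4 KiB I/Os. At that depth the average latency in the table below is dominated by queueing rather than media time, and Little's law gives a quick consistency check (mean in-flight I/Os ~= IOPS x mean latency):

  10372.44 IOPS x 12.315 ms ~= 127.7

i.e. the requested depth of 128 was kept essentially full for the whole 4-second run.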
00:38:41.981 10739.00 IOPS, 41.95 MiB/s [2024-11-20T13:58:50.268Z] 10394.50 IOPS, 40.60 MiB/s [2024-11-20T13:58:51.646Z] 10383.67 IOPS, 40.56 MiB/s [2024-11-20T13:58:51.646Z] 10382.25 IOPS, 40.56 MiB/s 00:38:43.927 Latency(us) 00:38:43.927 [2024-11-20T13:58:51.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:43.927 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:38:43.927 ftl0 : 4.02 10372.44 40.52 0.00 0.00 12314.78 262.93 71431.38 00:38:43.927 [2024-11-20T13:58:51.646Z] =================================================================================================================== 00:38:43.927 [2024-11-20T13:58:51.646Z] Total : 10372.44 40.52 0.00 0.00 12314.78 0.00 71431.38 00:38:43.927 [2024-11-20 13:58:51.279811] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:38:43.927 { 00:38:43.927 "results": [ 00:38:43.927 { 00:38:43.927 "job": "ftl0", 00:38:43.927 "core_mask": "0x1", 00:38:43.927 "workload": "randwrite", 00:38:43.927 "status": "finished", 00:38:43.927 "queue_depth": 128, 00:38:43.927 "io_size": 4096, 00:38:43.927 "runtime": 4.015835, 00:38:43.927 "iops": 10372.438110629546, 00:38:43.927 "mibps": 40.517336369646664, 00:38:43.927 "io_failed": 0, 00:38:43.927 "io_timeout": 0, 00:38:43.927 "avg_latency_us": 12314.776147438777, 00:38:43.927 "min_latency_us": 262.9310043668122, 00:38:43.927 "max_latency_us": 71431.37816593886 00:38:43.927 } 00:38:43.927 ], 00:38:43.927 "core_count": 1 00:38:43.927 } 00:38:43.927 13:58:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:38:43.927 [2024-11-20 13:58:51.431180] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:38:43.927 Running I/O for 4 seconds... 
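The final pass switches to -w verify, under which bdevperf checks data integrity by reading back and comparing what it has written rather than timing independent I/Os alone. Its verification LBA range below, start 0x0 length 0x1400000, is 20,971,520 blocks — the whole ftl0 namespace, matching the 'L2P entries: 20971520' figure from the startup layout dump. The same Little's-law check holds here too: 8103.17 IOPS x 15.747 ms ~= 127.6, again consistent with a full queue of 128.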
00:38:45.803 8086.00 IOPS, 31.59 MiB/s [2024-11-20T13:58:54.460Z] 8102.50 IOPS, 31.65 MiB/s [2024-11-20T13:58:55.839Z] 8106.67 IOPS, 31.67 MiB/s [2024-11-20T13:58:55.839Z] 8091.25 IOPS, 31.61 MiB/s 00:38:48.120 Latency(us) 00:38:48.120 [2024-11-20T13:58:55.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:48.120 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:38:48.120 Verification LBA range: start 0x0 length 0x1400000 00:38:48.120 ftl0 : 4.01 8103.17 31.65 0.00 0.00 15747.33 279.03 17628.90 00:38:48.120 [2024-11-20T13:58:55.839Z] =================================================================================================================== 00:38:48.120 [2024-11-20T13:58:55.839Z] Total : 8103.17 31.65 0.00 0.00 15747.33 0.00 17628.90 00:38:48.120 [2024-11-20 13:58:55.451131] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:38:48.120 { 00:38:48.120 "results": [ 00:38:48.120 { 00:38:48.120 "job": "ftl0", 00:38:48.120 "core_mask": "0x1", 00:38:48.120 "workload": "verify", 00:38:48.120 "status": "finished", 00:38:48.120 "verify_range": { 00:38:48.120 "start": 0, 00:38:48.120 "length": 20971520 00:38:48.120 }, 00:38:48.120 "queue_depth": 128, 00:38:48.120 "io_size": 4096, 00:38:48.120 "runtime": 4.009417, 00:38:48.120 "iops": 8103.1731047182175, 00:38:48.120 "mibps": 31.653019940305537, 00:38:48.120 "io_failed": 0, 00:38:48.120 "io_timeout": 0, 00:38:48.120 "avg_latency_us": 15747.330668290686, 00:38:48.120 "min_latency_us": 279.0288209606987, 00:38:48.120 "max_latency_us": 17628.897816593886 00:38:48.120 } 00:38:48.120 ], 00:38:48.120 "core_count": 1 00:38:48.120 } 00:38:48.120 13:58:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:38:48.120 [2024-11-20 13:58:55.665328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.120 [2024-11-20 13:58:55.665472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:48.120 [2024-11-20 13:58:55.665491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:48.120 [2024-11-20 13:58:55.665501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.120 [2024-11-20 13:58:55.665528] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:48.120 [2024-11-20 13:58:55.669540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.120 [2024-11-20 13:58:55.669572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:48.120 [2024-11-20 13:58:55.669583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.001 ms 00:38:48.120 [2024-11-20 13:58:55.669592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.120 [2024-11-20 13:58:55.671699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.120 [2024-11-20 13:58:55.671747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:48.120 [2024-11-20 13:58:55.671761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.082 ms 00:38:48.120 [2024-11-20 13:58:55.671773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.384 [2024-11-20 13:58:55.882082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.384 [2024-11-20 13:58:55.882139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:38:48.384 [2024-11-20 13:58:55.882160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 210.689 ms 00:38:48.384 [2024-11-20 13:58:55.882171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.384 [2024-11-20 13:58:55.887422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.384 [2024-11-20 13:58:55.887497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:48.384 [2024-11-20 13:58:55.887514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.220 ms 00:38:48.384 [2024-11-20 13:58:55.887522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.384 [2024-11-20 13:58:55.924145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.384 [2024-11-20 13:58:55.924184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:48.384 [2024-11-20 13:58:55.924200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.639 ms 00:38:48.384 [2024-11-20 13:58:55.924208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.384 [2024-11-20 13:58:55.945240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.384 [2024-11-20 13:58:55.945281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:48.384 [2024-11-20 13:58:55.945296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.031 ms 00:38:48.384 [2024-11-20 13:58:55.945304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.384 [2024-11-20 13:58:55.945437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.384 [2024-11-20 13:58:55.945449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:48.384 [2024-11-20 13:58:55.945462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:38:48.384 [2024-11-20 13:58:55.945470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.384 [2024-11-20 13:58:55.979667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.384 [2024-11-20 13:58:55.979703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:38:48.384 [2024-11-20 13:58:55.979722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.244 ms 00:38:48.384 [2024-11-20 13:58:55.979730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.384 [2024-11-20 13:58:56.013769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.384 [2024-11-20 13:58:56.013820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:38:48.384 [2024-11-20 13:58:56.013833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.065 ms 00:38:48.384 [2024-11-20 13:58:56.013840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.384 [2024-11-20 13:58:56.047895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.384 [2024-11-20 13:58:56.047984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:38:48.384 [2024-11-20 13:58:56.048001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.082 ms 00:38:48.384 [2024-11-20 13:58:56.048009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.384 [2024-11-20 13:58:56.081601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:48.384 [2024-11-20 
13:58:56.081634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:38:48.384 [2024-11-20 13:58:56.081656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.573 ms 00:38:48.384 [2024-11-20 13:58:56.081663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:48.384 [2024-11-20 13:58:56.081697] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:38:48.384 [2024-11-20 13:58:56.081710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:38:48.384 [2024-11-20 13:58:56.081732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:38:48.384 [2024-11-20 13:58:56.081740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:38:48.384 [2024-11-20 13:58:56.081749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:38:48.384 [2024-11-20 13:58:56.081756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:38:48.384 [2024-11-20 13:58:56.081765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:38:48.384 [2024-11-20 13:58:56.081773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:38:48.384 [2024-11-20 13:58:56.081782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:38:48.385 [2024-11-20 13:58:56.081789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:38:48.385 [2024-11-20 13:58:56.081798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:38:48.385 [2024-11-20 13:58:56.081806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:38:48.385 [2024-11-20 13:58:56.081815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:38:48.385 [2024-11-20 13:58:56.081824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:38:48.385 [2024-11-20 13:58:56.081837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:38:48.385 [2024-11-20 13:58:56.081844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:38:48.385 [2024-11-20 13:58:56.081853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:38:48.385 [2024-11-20 13:58:56.081859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:38:48.385 [2024-11-20 13:58:56.081871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:38:48.385 [2024-11-20 13:58:56.081878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:38:48.385 [2024-11-20 13:58:56.081887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:38:48.385 [2024-11-20 13:58:56.081894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:38:48.385 [2024-11-20 13:58:56.081903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.081911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.081920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.081927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.081936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.081945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.081953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.081961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.081972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.081980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.081989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.081998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:38:48.385 [2024-11-20 13:58:56.082550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:38:48.386 [2024-11-20 13:58:56.082557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:38:48.386 [2024-11-20 13:58:56.082566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:38:48.386 [2024-11-20 13:58:56.082580] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:38:48.386 [2024-11-20 13:58:56.082589] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 523ac80f-4f75-4295-9408-9932a7ca89a8
00:38:48.386 [2024-11-20 13:58:56.082596] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:38:48.386 [2024-11-20 13:58:56.082607] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:38:48.386 [2024-11-20 13:58:56.082614] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:38:48.386 [2024-11-20 13:58:56.082623] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:38:48.386 [2024-11-20 13:58:56.082630] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:38:48.386 [2024-11-20 13:58:56.082653] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:38:48.386 [2024-11-20 13:58:56.082660] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:38:48.386 [2024-11-20 13:58:56.082670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:38:48.386 [2024-11-20 13:58:56.082676] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:38:48.386 [2024-11-20 13:58:56.082685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:48.386 [2024-11-20 13:58:56.082692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:38:48.386 [2024-11-20 13:58:56.082702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.992 ms
00:38:48.386 [2024-11-20 13:58:56.082709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:48.649 [2024-11-20 13:58:56.102353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:48.649 [2024-11-20 13:58:56.102394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:38:48.649 [2024-11-20 13:58:56.102409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.623 ms
00:38:48.649 [2024-11-20 13:58:56.102419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:48.649 [2024-11-20 13:58:56.103013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:48.649 [2024-11-20 13:58:56.103041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:38:48.649 [2024-11-20 13:58:56.103052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms
00:38:48.649 [2024-11-20 13:58:56.103060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:48.649 [2024-11-20 13:58:56.155989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:48.649 [2024-11-20 13:58:56.156028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:38:48.649 [2024-11-20 13:58:56.156043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:48.649 [2024-11-20 13:58:56.156051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
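The WAF figure in the statistics dump above is write amplification: total media writes divided by user-submitted writes. With total writes at 960 (all FTL housekeeping from this shutdown) and user writes at 0, the ratio is a division by zero, which the dump renders as inf. A minimal sketch of the same computation, assuming only the two counters printed above (variable names are illustrative, not part of the harness):

  total_writes=960
  user_writes=0
  if (( user_writes == 0 )); then
    echo 'WAF: inf'   # no user I/O yet, so the ratio is undefined
  else
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.3f\n", t / u }'
  fi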
00:38:48.649 [2024-11-20 13:58:56.156112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:48.649 [2024-11-20 13:58:56.156120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:38:48.649 [2024-11-20 13:58:56.156129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:48.649 [2024-11-20 13:58:56.156136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:48.649 [2024-11-20 13:58:56.156211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:48.649 [2024-11-20 13:58:56.156224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:38:48.649 [2024-11-20 13:58:56.156234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:48.649 [2024-11-20 13:58:56.156242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:48.649 [2024-11-20 13:58:56.156260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:48.649 [2024-11-20 13:58:56.156268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:38:48.649 [2024-11-20 13:58:56.156279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:48.649 [2024-11-20 13:58:56.156286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:48.649 [2024-11-20 13:58:56.275928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:48.649 [2024-11-20 13:58:56.275994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:38:48.649 [2024-11-20 13:58:56.276012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:48.649 [2024-11-20 13:58:56.276020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:48.909 [2024-11-20 13:58:56.370490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:48.909 [2024-11-20 13:58:56.370632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:38:48.909 [2024-11-20 13:58:56.370652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:48.909 [2024-11-20 13:58:56.370659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:48.909 [2024-11-20 13:58:56.370807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:48.909 [2024-11-20 13:58:56.370822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:38:48.909 [2024-11-20 13:58:56.370833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:48.909 [2024-11-20 13:58:56.370841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:48.909 [2024-11-20 13:58:56.370889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:48.909 [2024-11-20 13:58:56.370899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:38:48.909 [2024-11-20 13:58:56.370910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:48.909 [2024-11-20 13:58:56.370919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:48.909 [2024-11-20 13:58:56.371035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:48.909 [2024-11-20 13:58:56.371048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:38:48.909 [2024-11-20 13:58:56.371064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:48.909 [2024-11-20 13:58:56.371071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:48.909 [2024-11-20 13:58:56.371109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:48.909 [2024-11-20 13:58:56.371120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:38:48.909 [2024-11-20 13:58:56.371131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:48.909 [2024-11-20 13:58:56.371139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:48.910 [2024-11-20 13:58:56.371181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:48.910 [2024-11-20 13:58:56.371190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:38:48.910 [2024-11-20 13:58:56.371203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:48.910 [2024-11-20 13:58:56.371210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:48.910 [2024-11-20 13:58:56.371257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:38:48.910 [2024-11-20 13:58:56.371278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:38:48.910 [2024-11-20 13:58:56.371288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:38:48.910 [2024-11-20 13:58:56.371296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:48.910 [2024-11-20 13:58:56.371433] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 707.421 ms, result 0
00:38:48.910 true
00:38:48.910 13:58:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 78710
00:38:48.910 13:58:56 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 78710 ']'
00:38:48.910 13:58:56 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 78710
00:38:48.910 13:58:56 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname
00:38:48.910 13:58:56 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:48.910 13:58:56 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78710
killing process with pid 78710
Received shutdown signal, test time was about 4.000000 seconds
00:38:48.910
00:38:48.910 Latency(us)
00:38:48.910 [2024-11-20T13:58:56.629Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:38:48.910 [2024-11-20T13:58:56.629Z] ===================================================================================================================
00:38:48.910 [2024-11-20T13:58:56.629Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
13:58:56 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0
13:58:56 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
13:58:56 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78710'
13:58:56 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 78710
13:58:56 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 78710
13:58:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
13:58:57 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
Remove shared memory files
13:58:57 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
13:58:57 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
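The xtrace above steps through the harness's killprocess helper: verify a PID was given, probe it with kill -0, resolve the process name via ps so a sudo wrapper is never signalled directly, then kill and reap it with wait. A condensed sketch of that logic, reconstructed from the trace (the shipped common/autotest_common.sh helper carries additional platform branches):

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                  # no PID supplied
    kill -0 "$pid" 2>/dev/null || return 0     # process already gone
    local process_name=
    if [ "$(uname)" = Linux ]; then
      process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" = sudo ] && return 1     # refuse to signal the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
  }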
13:58:57 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
13:58:57 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
13:58:57 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:38:50.292 13:58:57 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:38:50.292 ************************************
00:38:50.292 END TEST ftl_bdevperf
00:38:50.292 ************************************
00:38:50.292
00:38:50.292 real 0m23.265s
00:38:50.292 user 0m25.968s
00:38:50.292 sys 0m1.116s
00:38:50.292 13:58:57 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:50.292 13:58:57 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:38:50.292 13:58:57 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:38:50.292 13:58:57 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:38:50.292 13:58:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable
00:38:50.292 13:58:57 ftl -- common/autotest_common.sh@10 -- # set +x
00:38:50.292 ************************************
00:38:50.292 START TEST ftl_trim
00:38:50.292 ************************************
00:38:50.292 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:38:50.292 * Looking for test storage...
00:38:50.292 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:38:50.292 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:38:50.292 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lcov --version
00:38:50.292 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:38:50.292 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:38:50.292 13:58:57 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0
00:38:50.292 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:38:50.292 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:38:50.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:50.292 --rc genhtml_branch_coverage=1
00:38:50.292 --rc genhtml_function_coverage=1
00:38:50.292 --rc genhtml_legend=1
00:38:50.292 --rc geninfo_all_blocks=1
00:38:50.292 --rc geninfo_unexecuted_blocks=1
00:38:50.292
00:38:50.292 '
00:38:50.292 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:38:50.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:50.292 --rc genhtml_branch_coverage=1
00:38:50.292 --rc genhtml_function_coverage=1
00:38:50.292 --rc genhtml_legend=1
00:38:50.292 --rc geninfo_all_blocks=1
00:38:50.292 --rc geninfo_unexecuted_blocks=1
00:38:50.292
00:38:50.292 '
00:38:50.292 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:38:50.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:50.292 --rc genhtml_branch_coverage=1
00:38:50.292 --rc genhtml_function_coverage=1
00:38:50.292 --rc genhtml_legend=1
00:38:50.292 --rc geninfo_all_blocks=1
00:38:50.292 --rc geninfo_unexecuted_blocks=1
00:38:50.292
00:38:50.292 '
00:38:50.292 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:38:50.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:38:50.292 --rc genhtml_branch_coverage=1
00:38:50.292 --rc genhtml_function_coverage=1
00:38:50.292 --rc genhtml_legend=1
00:38:50.292 --rc geninfo_all_blocks=1
00:38:50.292 --rc geninfo_unexecuted_blocks=1
00:38:50.292
00:38:50.292 '
00:38:50.292 13:58:57 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:38:50.292 13:58:57 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh
00:38:50.292 13:58:57 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
13:58:57 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
13:58:57 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../..
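What the scripts/common.sh trace above is exercising: cmp_versions splits both version strings on ., - and :, then walks the fields numerically until one side wins; here 1 < 2 decides lt 1.15 2 on the first field. A compact sketch of that comparison, reconstructed from the trace (the shipped helper also validates each field through its decimal step and supports more operators):

  lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
      (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first differing field decides
      (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal is not less-than
  }
  lt 1.15 2 && echo '1.15 < 2'   # prints: 1.15 < 2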
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]'
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]'
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]'
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]'
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid=
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid=
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]'
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]'
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid=
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid=
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]]
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=79059
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 79059
00:38:50.293 13:58:57 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:38:50.293 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79059 ']'
00:38:50.293 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:50.293 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:50.293 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:50.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:50.293 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:50.293 13:58:57 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:38:50.552 [2024-11-20 13:58:58.002840] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization...
00:38:50.552 [2024-11-20 13:58:58.003066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79059 ]
00:38:50.552 [2024-11-20 13:58:58.185898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:38:50.811 [2024-11-20 13:58:58.304389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:38:50.811 [2024-11-20 13:58:58.304551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:38:50.811 [2024-11-20 13:58:58.304588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:38:51.748 13:58:59 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:51.748 13:58:59 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0
00:38:51.748 13:58:59 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424
00:38:51.748 13:58:59 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0
00:38:51.748 13:58:59 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0
00:38:51.748 13:58:59 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424
00:38:51.748 13:58:59 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev
00:38:51.748 13:58:59 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
00:38:52.007 13:58:59 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1
00:38:52.007 13:58:59 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size
00:38:52.007 13:58:59 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1
00:38:52.007 13:58:59 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1
00:38:52.007 13:58:59 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:38:52.007 13:58:59 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:38:52.007 13:58:59 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:38:52.007 13:58:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1
00:38:52.266 13:58:59 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:38:52.266 {
00:38:52.266 "name": "nvme0n1",
00:38:52.266 "aliases": [
00:38:52.266 "16b8043f-7e98-42e0-95d8-5f36e13e4584"
00:38:52.266 ],
00:38:52.266 "product_name": "NVMe disk",
00:38:52.266 "block_size": 4096,
00:38:52.266 "num_blocks": 1310720,
00:38:52.266 "uuid": "16b8043f-7e98-42e0-95d8-5f36e13e4584",
00:38:52.266 "numa_id": -1,
00:38:52.266 "assigned_rate_limits": {
00:38:52.266 "rw_ios_per_sec": 0,
00:38:52.266 "rw_mbytes_per_sec": 0,
00:38:52.266 "r_mbytes_per_sec": 0,
00:38:52.266 "w_mbytes_per_sec": 0
00:38:52.266 },
00:38:52.266 "claimed": true,
00:38:52.266 "claim_type": "read_many_write_one",
00:38:52.266 "zoned": false,
00:38:52.266 "supported_io_types": {
00:38:52.266 "read": true,
00:38:52.266 "write": true,
00:38:52.266 "unmap": true,
00:38:52.266 "flush": true,
00:38:52.266 "reset": true,
00:38:52.266 "nvme_admin": true,
00:38:52.266 "nvme_io": true,
00:38:52.266 "nvme_io_md": false,
00:38:52.266 "write_zeroes": true,
00:38:52.266 "zcopy": false,
00:38:52.266 "get_zone_info": false,
00:38:52.266 "zone_management": false,
00:38:52.266 "zone_append": false,
00:38:52.266 "compare": true,
00:38:52.266 "compare_and_write": false,
00:38:52.266 "abort": true,
00:38:52.266 "seek_hole": false,
00:38:52.266 "seek_data": false,
00:38:52.266 "copy": true,
00:38:52.266 "nvme_iov_md": false
00:38:52.266 },
00:38:52.266 "driver_specific": {
00:38:52.266 "nvme": [
00:38:52.266 {
00:38:52.266 "pci_address": "0000:00:11.0",
00:38:52.266 "trid": {
00:38:52.266 "trtype": "PCIe",
00:38:52.266 "traddr": "0000:00:11.0"
00:38:52.266 },
00:38:52.266 "ctrlr_data": {
00:38:52.266 "cntlid": 0,
00:38:52.266 "vendor_id": "0x1b36",
00:38:52.266 "model_number": "QEMU NVMe Ctrl",
00:38:52.266 "serial_number": "12341",
00:38:52.266 "firmware_revision": "8.0.0",
00:38:52.266 "subnqn": "nqn.2019-08.org.qemu:12341",
00:38:52.266 "oacs": {
00:38:52.266 "security": 0,
00:38:52.266 "format": 1,
00:38:52.266 "firmware": 0,
00:38:52.266 "ns_manage": 1
00:38:52.266 },
00:38:52.266 "multi_ctrlr": false,
00:38:52.266 "ana_reporting": false
00:38:52.266 },
00:38:52.266 "vs": {
00:38:52.266 "nvme_version": "1.4"
00:38:52.266 },
00:38:52.266 "ns_data": {
00:38:52.266 "id": 1,
00:38:52.266 "can_share": false
00:38:52.266 }
00:38:52.266 }
00:38:52.266 ],
00:38:52.266 "mp_policy": "active_passive"
00:38:52.266 }
00:38:52.266 }
00:38:52.266 ]'
00:38:52.267 13:58:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:38:52.267 13:58:59 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
00:38:52.267 13:58:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:38:52.267 13:58:59 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720
00:38:52.267 13:58:59 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120
00:38:52.267 13:58:59 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120
00:38:52.267 13:58:59 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120
00:38:52.267 13:58:59 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]]
00:38:52.267 13:58:59 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols
00:38:52.267 13:58:59 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores
00:38:52.267 13:58:59 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid'
00:38:52.526 13:59:00 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=f097a777-60ec-420a-97ba-2961d15d31f9
00:38:52.526 13:59:00 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores
00:38:52.526 13:59:00 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f097a777-60ec-420a-97ba-2961d15d31f9
00:38:52.784 13:59:00 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
00:38:53.043 13:59:00 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=edb534ee-0fe6-42ca-8693-c2d014d4d1a7
00:38:53.043 13:59:00 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u edb534ee-0fe6-42ca-8693-c2d014d4d1a7
00:38:53.301 13:59:00 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=f0aa948e-2896-4409-a611-4eef4cbc0ea4
00:38:53.301 13:59:00 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f0aa948e-2896-4409-a611-4eef4cbc0ea4
00:38:53.301 13:59:00 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0
00:38:53.301 13:59:00 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0
00:38:53.301 13:59:00 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=f0aa948e-2896-4409-a611-4eef4cbc0ea4
00:38:53.301 13:59:00 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size=
00:38:53.301 13:59:00 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size f0aa948e-2896-4409-a611-4eef4cbc0ea4
00:38:53.301 13:59:00 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f0aa948e-2896-4409-a611-4eef4cbc0ea4
00:38:53.301 13:59:00 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:38:53.301 13:59:00 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:38:53.301 13:59:00 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:38:53.301 13:59:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0aa948e-2896-4409-a611-4eef4cbc0ea4
00:38:53.301 13:59:00 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:38:53.301 {
00:38:53.301 "name": "f0aa948e-2896-4409-a611-4eef4cbc0ea4",
00:38:53.301 "aliases": [
00:38:53.301 "lvs/nvme0n1p0"
00:38:53.301 ],
00:38:53.301 "product_name": "Logical Volume",
00:38:53.301 "block_size": 4096,
00:38:53.301 "num_blocks": 26476544,
00:38:53.301 "uuid": "f0aa948e-2896-4409-a611-4eef4cbc0ea4",
00:38:53.301 "assigned_rate_limits": {
00:38:53.301 "rw_ios_per_sec": 0,
00:38:53.301 "rw_mbytes_per_sec": 0,
00:38:53.301 "r_mbytes_per_sec": 0,
00:38:53.301 "w_mbytes_per_sec": 0
00:38:53.301 },
00:38:53.301 "claimed": false,
00:38:53.301 "zoned": false,
00:38:53.301 "supported_io_types": {
00:38:53.301 "read": true,
00:38:53.302 "write": true,
00:38:53.302 "unmap": true,
00:38:53.302 "flush": false,
00:38:53.302 "reset": true,
00:38:53.302 "nvme_admin": false,
00:38:53.302 "nvme_io": false,
00:38:53.302 "nvme_io_md": false,
00:38:53.302 "write_zeroes": true,
00:38:53.302 "zcopy": false,
00:38:53.302 "get_zone_info": false,
00:38:53.302 "zone_management": false,
00:38:53.302 "zone_append": false,
00:38:53.302 "compare": false,
00:38:53.302 "compare_and_write": false,
00:38:53.302 "abort": false,
00:38:53.302 "seek_hole": true,
00:38:53.302 "seek_data": true,
00:38:53.302 "copy": false,
00:38:53.302 "nvme_iov_md": false
00:38:53.302 },
00:38:53.302 "driver_specific": {
00:38:53.302 "lvol": {
00:38:53.302 "lvol_store_uuid": "edb534ee-0fe6-42ca-8693-c2d014d4d1a7",
00:38:53.302 "base_bdev": "nvme0n1",
00:38:53.302 "thin_provision": true,
00:38:53.302 "num_allocated_clusters": 0,
00:38:53.302 "snapshot": false,
00:38:53.302 "clone": false,
00:38:53.302 "esnap_clone": false
00:38:53.302 }
00:38:53.302 }
00:38:53.302 }
00:38:53.302 ]'
13:59:00 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
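The sequence above provisions the FTL base device: the stale lvstore is deleted, a fresh one named lvs is created on nvme0n1, and a thin-provisioned lvol nvme0n1p0 of 103424 MiB is carved from it. Replayed as plain RPC calls (the UUIDs are the ones this run produced; a fresh run would mint new ones):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_lvol_delete_lvstore -u f097a777-60ec-420a-97ba-2961d15d31f9           # drop the stale store
  $rpc bdev_lvol_create_lvstore nvme0n1 lvs                                       # prints the new lvstore UUID
  $rpc bdev_lvol_create nvme0n1p0 103424 -t -u edb534ee-0fe6-42ca-8693-c2d014d4d1a7  # thin lvol -> f0aa948e-...

The -t flag is what lets a 103424 MiB volume sit on a 5120 MiB base device: clusters are only allocated on first write, and the JSON above confirms "thin_provision": true with "num_allocated_clusters": 0.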
00:38:53.560 13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
00:38:53.560 13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
00:38:53.560 13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544
00:38:53.560 13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424
00:38:53.560 13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424
00:38:53.560 13:59:01 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171
00:38:53.560 13:59:01 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev
00:38:53.560 13:59:01 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
00:38:53.819 13:59:01 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
00:38:53.819 13:59:01 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]]
00:38:53.819 13:59:01 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size f0aa948e-2896-4409-a611-4eef4cbc0ea4
00:38:53.819 13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f0aa948e-2896-4409-a611-4eef4cbc0ea4
00:38:53.819 13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
00:38:53.819 13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
00:38:53.819 13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
00:38:53.819 13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0aa948e-2896-4409-a611-4eef4cbc0ea4
00:38:54.078 13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:38:54.078 {
00:38:54.078 "name": "f0aa948e-2896-4409-a611-4eef4cbc0ea4",
00:38:54.078 "aliases": [
00:38:54.078 "lvs/nvme0n1p0"
00:38:54.078 ],
00:38:54.078 "product_name": "Logical Volume",
00:38:54.078 "block_size": 4096,
00:38:54.078 "num_blocks": 26476544,
00:38:54.078 "uuid": "f0aa948e-2896-4409-a611-4eef4cbc0ea4",
00:38:54.078 "assigned_rate_limits": {
00:38:54.078 "rw_ios_per_sec": 0,
00:38:54.078 "rw_mbytes_per_sec": 0,
00:38:54.078 "r_mbytes_per_sec": 0,
00:38:54.078 "w_mbytes_per_sec": 0
00:38:54.078 },
00:38:54.078 "claimed": false,
00:38:54.078 "zoned": false,
00:38:54.078 "supported_io_types": {
00:38:54.078 "read": true,
00:38:54.078 "write": true,
00:38:54.078 "unmap": true,
00:38:54.078 "flush": false,
00:38:54.078 "reset": true,
00:38:54.078 "nvme_admin": false,
00:38:54.078 "nvme_io": false,
00:38:54.078 "nvme_io_md": false,
00:38:54.078 "write_zeroes": true,
00:38:54.078 "zcopy": false,
00:38:54.078 "get_zone_info": false,
00:38:54.078 "zone_management": false,
00:38:54.078 "zone_append": false,
00:38:54.078 "compare": false,
00:38:54.078 "compare_and_write": false,
00:38:54.078 "abort": false,
00:38:54.078 "seek_hole": true,
00:38:54.078 "seek_data": true,
00:38:54.078 "copy": false,
00:38:54.078 "nvme_iov_md": false
00:38:54.078 },
00:38:54.078 "driver_specific": {
00:38:54.078 "lvol": {
00:38:54.078 "lvol_store_uuid": "edb534ee-0fe6-42ca-8693-c2d014d4d1a7",
00:38:54.078 "base_bdev": "nvme0n1",
00:38:54.078 "thin_provision": true,
00:38:54.078 "num_allocated_clusters": 0,
00:38:54.078 "snapshot": false,
00:38:54.078 "clone": false,
00:38:54.078 "esnap_clone": false
00:38:54.078 }
00:38:54.078 }
00:38:54.078 }
00:38:54.078 ]'
00:38:54.078 13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
00:38:54.078 13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544
13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424
13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424
13:59:01 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171
13:59:01 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1
00:38:54.338 13:59:01 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0
00:38:54.338 13:59:01 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60
00:38:54.338 13:59:01 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size f0aa948e-2896-4409-a611-4eef4cbc0ea4
13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=f0aa948e-2896-4409-a611-4eef4cbc0ea4
13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info
13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs
13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb
13:59:01 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f0aa948e-2896-4409-a611-4eef4cbc0ea4
00:38:54.598 13:59:02 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[
00:38:54.598 {
00:38:54.598 "name": "f0aa948e-2896-4409-a611-4eef4cbc0ea4",
00:38:54.598 "aliases": [
00:38:54.598 "lvs/nvme0n1p0"
00:38:54.598 ],
00:38:54.598 "product_name": "Logical Volume",
00:38:54.598 "block_size": 4096,
00:38:54.598 "num_blocks": 26476544,
00:38:54.598 "uuid": "f0aa948e-2896-4409-a611-4eef4cbc0ea4",
00:38:54.598 "assigned_rate_limits": {
00:38:54.598 "rw_ios_per_sec": 0,
00:38:54.598 "rw_mbytes_per_sec": 0,
00:38:54.598 "r_mbytes_per_sec": 0,
00:38:54.598 "w_mbytes_per_sec": 0
00:38:54.598 },
00:38:54.598 "claimed": false,
00:38:54.598 "zoned": false,
00:38:54.598 "supported_io_types": {
00:38:54.598 "read": true,
00:38:54.598 "write": true,
00:38:54.598 "unmap": true,
00:38:54.598 "flush": false,
00:38:54.598 "reset": true,
00:38:54.598 "nvme_admin": false,
00:38:54.598 "nvme_io": false,
00:38:54.598 "nvme_io_md": false,
00:38:54.598 "write_zeroes": true,
00:38:54.598 "zcopy": false,
00:38:54.598 "get_zone_info": false,
00:38:54.598 "zone_management": false,
00:38:54.598 "zone_append": false,
00:38:54.598 "compare": false,
00:38:54.598 "compare_and_write": false,
00:38:54.598 "abort": false,
00:38:54.598 "seek_hole": true,
00:38:54.598 "seek_data": true,
00:38:54.598 "copy": false,
00:38:54.598 "nvme_iov_md": false
00:38:54.598 },
00:38:54.598 "driver_specific": {
00:38:54.598 "lvol": {
00:38:54.598 "lvol_store_uuid": "edb534ee-0fe6-42ca-8693-c2d014d4d1a7",
00:38:54.598 "base_bdev": "nvme0n1",
00:38:54.598 "thin_provision": true,
00:38:54.598 "num_allocated_clusters": 0,
00:38:54.598 "snapshot": false,
00:38:54.598 "clone": false,
00:38:54.598 "esnap_clone": false
00:38:54.598 }
00:38:54.598 }
00:38:54.598 }
00:38:54.598 ]'
13:59:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size'
13:59:02 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096
13:59:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks'
13:59:02 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544
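The cache side mirrors the base side: the 0000:00:10.0 controller comes up as nvc0n1, get_bdev_size walks the same jq path, and bdev_split_create carves a single 5171 MiB partition, nvc0n1p0, to serve as the FTL non-volatile write-buffer cache. The two cache-path calls, condensed from the trace above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0   # -> nvc0n1
  $rpc bdev_split_create nvc0n1 -s 5171 1                            # one 5171 MiB split -> nvc0n1p0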
13:59:02 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424
13:59:02 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424
13:59:02 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60
13:59:02 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f0aa948e-2896-4409-a611-4eef4cbc0ea4 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
00:38:54.858 [2024-11-20 13:59:02.429625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:54.858 [2024-11-20 13:59:02.429731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:38:54.858 [2024-11-20 13:59:02.429771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:38:54.858 [2024-11-20 13:59:02.429780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:54.858 [2024-11-20 13:59:02.432711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:54.858 [2024-11-20 13:59:02.432817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:38:54.858 [2024-11-20 13:59:02.432834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.904 ms
00:38:54.858 [2024-11-20 13:59:02.432843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:54.858 [2024-11-20 13:59:02.432959] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:38:54.859 [2024-11-20 13:59:02.433928] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:38:54.859 [2024-11-20 13:59:02.433956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:54.859 [2024-11-20 13:59:02.433965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:38:54.859 [2024-11-20 13:59:02.433977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.008 ms
00:38:54.859 [2024-11-20 13:59:02.433984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:54.859 [2024-11-20 13:59:02.434084] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID de3f65f3-0236-49e4-9c47-e95c64595e9c
00:38:54.859 [2024-11-20 13:59:02.435496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:54.859 [2024-11-20 13:59:02.435540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock
00:38:54.859 [2024-11-20 13:59:02.435551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms
00:38:54.859 [2024-11-20 13:59:02.435562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:54.859 [2024-11-20 13:59:02.443115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:54.859 [2024-11-20 13:59:02.443229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:38:54.859 [2024-11-20 13:59:02.443246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.468 ms
00:38:54.859 [2024-11-20 13:59:02.443257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:54.859 [2024-11-20 13:59:02.443435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:54.859 [2024-11-20 13:59:02.443453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:38:54.859 [2024-11-20 13:59:02.443464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms
00:38:54.859 [2024-11-20 13:59:02.443481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:54.859 [2024-11-20 13:59:02.443520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:54.859 [2024-11-20 13:59:02.443534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:38:54.859 [2024-11-20 13:59:02.443542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms
00:38:54.859 [2024-11-20 13:59:02.443554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:54.859 [2024-11-20 13:59:02.443597] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:38:54.859 [2024-11-20 13:59:02.448248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:54.859 [2024-11-20 13:59:02.448282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:38:54.859 [2024-11-20 13:59:02.448295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.670 ms
00:38:54.859 [2024-11-20 13:59:02.448303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:54.859 [2024-11-20 13:59:02.448363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:54.859 [2024-11-20 13:59:02.448374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:38:54.859 [2024-11-20 13:59:02.448385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:38:54.859 [2024-11-20 13:59:02.448411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:54.859 [2024-11-20 13:59:02.448445] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1
00:38:54.859 [2024-11-20 13:59:02.448578] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:38:54.859 [2024-11-20 13:59:02.448596] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:38:54.859 [2024-11-20 13:59:02.448607] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:38:54.859 [2024-11-20 13:59:02.448620] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:38:54.859 [2024-11-20 13:59:02.448629] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:38:54.859 [2024-11-20 13:59:02.448640] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:38:54.859 [2024-11-20 13:59:02.448649] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:38:54.859 [2024-11-20 13:59:02.448658] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:38:54.859 [2024-11-20 13:59:02.448667] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:38:54.859 [2024-11-20 13:59:02.448677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:54.859 [2024-11-20 13:59:02.448685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:38:54.859 [2024-11-20 13:59:02.448695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms
00:38:54.859 [2024-11-20 13:59:02.448703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:54.859 [2024-11-20 13:59:02.448809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:54.859 [2024-11-20 13:59:02.448821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:38:54.859 [2024-11-20 13:59:02.448832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms
00:38:54.859 [2024-11-20 13:59:02.448839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:54.859 [2024-11-20 13:59:02.448956] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:38:54.859 [2024-11-20 13:59:02.448967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:38:54.859 [2024-11-20 13:59:02.448978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:38:54.859 [2024-11-20 13:59:02.448986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:38:54.859 [2024-11-20 13:59:02.448996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:38:54.859 [2024-11-20 13:59:02.449003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:38:54.859 [2024-11-20 13:59:02.449023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:38:54.859 [2024-11-20 13:59:02.449030] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:38:54.859 [2024-11-20 13:59:02.449039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:38:54.859 [2024-11-20 13:59:02.449045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:38:54.859 [2024-11-20 13:59:02.449054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:38:54.859 [2024-11-20 13:59:02.449062] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:38:54.859 [2024-11-20 13:59:02.449070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:38:54.859 [2024-11-20 13:59:02.449077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:38:54.859 [2024-11-20 13:59:02.449087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:38:54.859 [2024-11-20 13:59:02.449094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:38:54.859 [2024-11-20 13:59:02.449104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:38:54.859 [2024-11-20 13:59:02.449111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:38:54.859 [2024-11-20 13:59:02.449120] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:38:54.859 [2024-11-20 13:59:02.449128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:38:54.859 [2024-11-20 13:59:02.449137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:38:54.859 [2024-11-20 13:59:02.449145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:38:54.859 [2024-11-20 13:59:02.449153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:38:54.859 [2024-11-20 13:59:02.449159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:38:54.859 [2024-11-20 13:59:02.449168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:38:54.859 [2024-11-20 13:59:02.449174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:38:54.859 [2024-11-20 13:59:02.449183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:38:54.859 [2024-11-20 13:59:02.449190] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:38:54.859 [2024-11-20 13:59:02.449198] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:38:54.859 [2024-11-20 13:59:02.449204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:38:54.859 [2024-11-20 13:59:02.449212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:38:54.859 [2024-11-20 13:59:02.449219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:38:54.859 [2024-11-20 13:59:02.449229] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:38:54.859 [2024-11-20 13:59:02.449235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:38:54.859 [2024-11-20 13:59:02.449243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:38:54.859 [2024-11-20 13:59:02.449249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:38:54.859 [2024-11-20 13:59:02.449257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:38:54.859 [2024-11-20 13:59:02.449264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:38:54.859 [2024-11-20 13:59:02.449273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:38:54.859 [2024-11-20 13:59:02.449280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:38:54.859 [2024-11-20 13:59:02.449288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:38:54.859 [2024-11-20 13:59:02.449294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:38:54.859 [2024-11-20 13:59:02.449302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:38:54.859 [2024-11-20 13:59:02.449309] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:38:54.859 [2024-11-20 13:59:02.449317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:38:54.859 [2024-11-20 13:59:02.449325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:38:54.859 [2024-11-20 13:59:02.449334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:38:54.859 [2024-11-20 13:59:02.449341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:38:54.859 [2024-11-20 13:59:02.449353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:38:54.859 [2024-11-20 13:59:02.449359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:38:54.859 [2024-11-20 13:59:02.449368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:38:54.859 [2024-11-20 13:59:02.449375] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:38:54.859 [2024-11-20 13:59:02.449383] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:38:54.859 [2024-11-20 13:59:02.449394] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:38:54.859 [2024-11-20 13:59:02.449407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:38:54.859 [2024-11-20 13:59:02.449417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:38:54.859 [2024-11-20 13:59:02.449426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:38:54.859 [2024-11-20 13:59:02.449433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:38:54.860 [2024-11-20 13:59:02.449442] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:38:54.860 [2024-11-20 13:59:02.449449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:38:54.860 [2024-11-20 13:59:02.449458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:38:54.860 [2024-11-20 13:59:02.449465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:38:54.860 [2024-11-20 13:59:02.449474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:38:54.860 [2024-11-20 13:59:02.449481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:38:54.860 [2024-11-20 13:59:02.449492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:38:54.860 [2024-11-20 13:59:02.449499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:38:54.860 [2024-11-20 13:59:02.449508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:38:54.860 [2024-11-20 13:59:02.449515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:38:54.860 [2024-11-20 13:59:02.449524] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:38:54.860 [2024-11-20 13:59:02.449531] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:38:54.860 [2024-11-20 13:59:02.449546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:38:54.860 [2024-11-20 13:59:02.449553] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:38:54.860 [2024-11-20 13:59:02.449562] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:38:54.860 [2024-11-20 13:59:02.449569] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:38:54.860 [2024-11-20 13:59:02.449579] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:38:54.860 [2024-11-20 13:59:02.449587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:38:54.860 [2024-11-20 13:59:02.449597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:38:54.860 [2024-11-20 13:59:02.449605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.689 ms
00:38:54.860 [2024-11-20 13:59:02.449614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:38:54.860 [2024-11-20 13:59:02.449692] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while.
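A consistency check on the layout just dumped: 23592960 L2P entries at an address size of 4 bytes is exactly the 90.00 MiB reported for the l2p region, and at one 4 KiB FTL block per entry the table maps 90 GiB of logical space out of the 103424 MiB base device (the rest is absorbed by the --overprovisioning 10 margin). The --l2p_dram_limit 60 passed to bdev_ftl_create caps how much of that table may be cached in DRAM at once. The arithmetic, with values copied from the dump:

  entries=23592960 entry_size=4
  echo "$(( entries * entry_size / 1024 / 1024 )) MiB"    # prints: 90 MiB (size of the l2p region)
  echo "$(( entries * 4096 / 1024 / 1024 / 1024 )) GiB"   # prints: 90 GiB of mapped logical space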
needs scrubbing, this may take a while. 00:38:54.860 [2024-11-20 13:59:02.449721] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:38:58.201 [2024-11-20 13:59:05.764746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.201 [2024-11-20 13:59:05.764901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:38:58.201 [2024-11-20 13:59:05.764936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3321.446 ms 00:38:58.201 [2024-11-20 13:59:05.764961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.201 [2024-11-20 13:59:05.802591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.201 [2024-11-20 13:59:05.802732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:38:58.201 [2024-11-20 13:59:05.802767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.317 ms 00:38:58.201 [2024-11-20 13:59:05.802791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.201 [2024-11-20 13:59:05.802980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.201 [2024-11-20 13:59:05.803023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:38:58.201 [2024-11-20 13:59:05.803060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:38:58.201 [2024-11-20 13:59:05.803095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.201 [2024-11-20 13:59:05.863307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.201 [2024-11-20 13:59:05.863426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:38:58.202 [2024-11-20 13:59:05.863456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.246 ms 00:38:58.202 [2024-11-20 13:59:05.863480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.202 [2024-11-20 13:59:05.863588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.202 [2024-11-20 13:59:05.863644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:38:58.202 [2024-11-20 13:59:05.863750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:38:58.202 [2024-11-20 13:59:05.863787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.202 [2024-11-20 13:59:05.864245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.202 [2024-11-20 13:59:05.864293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:38:58.202 [2024-11-20 13:59:05.864324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:38:58.202 [2024-11-20 13:59:05.864359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.202 [2024-11-20 13:59:05.864483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.202 [2024-11-20 13:59:05.864516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:38:58.202 [2024-11-20 13:59:05.864544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:38:58.202 [2024-11-20 13:59:05.864577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.202 [2024-11-20 13:59:05.884488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.202 [2024-11-20 13:59:05.884585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:38:58.202 [2024-11-20 13:59:05.884615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.867 ms 00:38:58.202 [2024-11-20 13:59:05.884639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.202 [2024-11-20 13:59:05.897055] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:38:58.202 [2024-11-20 13:59:05.913439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.202 [2024-11-20 13:59:05.913571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:38:58.202 [2024-11-20 13:59:05.913605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.698 ms 00:38:58.202 [2024-11-20 13:59:05.913625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.461 [2024-11-20 13:59:06.012806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.461 [2024-11-20 13:59:06.012954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:38:58.461 [2024-11-20 13:59:06.012990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 99.199 ms 00:38:58.461 [2024-11-20 13:59:06.013022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.461 [2024-11-20 13:59:06.013266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.461 [2024-11-20 13:59:06.013309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:38:58.461 [2024-11-20 13:59:06.013345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:38:58.461 [2024-11-20 13:59:06.013372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.461 [2024-11-20 13:59:06.049200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.461 [2024-11-20 13:59:06.049282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:38:58.461 [2024-11-20 13:59:06.049314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.837 ms 00:38:58.461 [2024-11-20 13:59:06.049336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.461 [2024-11-20 13:59:06.083857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.461 [2024-11-20 13:59:06.083933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:38:58.461 [2024-11-20 13:59:06.083975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.494 ms 00:38:58.461 [2024-11-20 13:59:06.083995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.461 [2024-11-20 13:59:06.084809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.461 [2024-11-20 13:59:06.084867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:38:58.461 [2024-11-20 13:59:06.084900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:38:58.461 [2024-11-20 13:59:06.084932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.721 [2024-11-20 13:59:06.186915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.721 [2024-11-20 13:59:06.187053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:38:58.721 [2024-11-20 13:59:06.187096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.117 ms 00:38:58.721 [2024-11-20 13:59:06.187126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
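Each management step in the startup sequence above is reported by trace_step as an Action record followed by the step's name, duration, and status. When triaging a slow FTL startup it can help to reduce those records to a sorted duration table; a minimal sketch in shell, assuming the console log has been saved with one record per line to console.log (the file name is hypothetical):

```bash
#!/usr/bin/env bash
# Pair each trace_step "name:" record with the "duration:" record that
# follows it, then print the slowest steps first (assumes one record per line).
awk '
  /trace_step/ && /name: /     { n = $0; sub(/.*name: /, "", n) }
  /trace_step/ && /duration: / { d = $0; sub(/.*duration: /, "", d);
                                 sub(/ ms.*/, "", d);
                                 printf "%12.3f ms  %s\n", d, n }
' console.log | sort -rn | head
```

On this run the "Scrub NV cache" step would top the list at 3321.446 ms, well above every other startup step shown.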
00:38:58.721 [2024-11-20 13:59:06.224923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.721 [2024-11-20 13:59:06.225031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:38:58.721 [2024-11-20 13:59:06.225065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.741 ms 00:38:58.721 [2024-11-20 13:59:06.225088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.721 [2024-11-20 13:59:06.261776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.721 [2024-11-20 13:59:06.261864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:38:58.721 [2024-11-20 13:59:06.261895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.638 ms 00:38:58.721 [2024-11-20 13:59:06.261917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.721 [2024-11-20 13:59:06.297758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.721 [2024-11-20 13:59:06.297845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:38:58.721 [2024-11-20 13:59:06.297877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.817 ms 00:38:58.721 [2024-11-20 13:59:06.297919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.721 [2024-11-20 13:59:06.298015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.721 [2024-11-20 13:59:06.298057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:38:58.721 [2024-11-20 13:59:06.298091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:38:58.721 [2024-11-20 13:59:06.298117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.721 [2024-11-20 13:59:06.298229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:58.721 [2024-11-20 13:59:06.298260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:38:58.721 [2024-11-20 13:59:06.298292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:38:58.721 [2024-11-20 13:59:06.298323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:58.721 [2024-11-20 13:59:06.299353] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:38:58.721 [2024-11-20 13:59:06.303780] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3876.904 ms, result 0 00:38:58.721 [2024-11-20 13:59:06.304684] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:38:58.721 { 00:38:58.721 "name": "ftl0", 00:38:58.721 "uuid": "de3f65f3-0236-49e4-9c47-e95c64595e9c" 00:38:58.721 } 00:38:58.721 13:59:06 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:38:58.721 13:59:06 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:38:58.721 13:59:06 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:58.721 13:59:06 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:38:58.721 13:59:06 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:58.721 13:59:06 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:58.721 13:59:06 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:58.994 13:59:06 ftl.ftl_trim -- 
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:38:59.254 [ 00:38:59.254 { 00:38:59.254 "name": "ftl0", 00:38:59.254 "aliases": [ 00:38:59.254 "de3f65f3-0236-49e4-9c47-e95c64595e9c" 00:38:59.254 ], 00:38:59.254 "product_name": "FTL disk", 00:38:59.254 "block_size": 4096, 00:38:59.254 "num_blocks": 23592960, 00:38:59.254 "uuid": "de3f65f3-0236-49e4-9c47-e95c64595e9c", 00:38:59.254 "assigned_rate_limits": { 00:38:59.254 "rw_ios_per_sec": 0, 00:38:59.254 "rw_mbytes_per_sec": 0, 00:38:59.254 "r_mbytes_per_sec": 0, 00:38:59.254 "w_mbytes_per_sec": 0 00:38:59.254 }, 00:38:59.254 "claimed": false, 00:38:59.254 "zoned": false, 00:38:59.254 "supported_io_types": { 00:38:59.254 "read": true, 00:38:59.254 "write": true, 00:38:59.254 "unmap": true, 00:38:59.254 "flush": true, 00:38:59.254 "reset": false, 00:38:59.254 "nvme_admin": false, 00:38:59.254 "nvme_io": false, 00:38:59.254 "nvme_io_md": false, 00:38:59.254 "write_zeroes": true, 00:38:59.254 "zcopy": false, 00:38:59.254 "get_zone_info": false, 00:38:59.254 "zone_management": false, 00:38:59.254 "zone_append": false, 00:38:59.254 "compare": false, 00:38:59.254 "compare_and_write": false, 00:38:59.254 "abort": false, 00:38:59.254 "seek_hole": false, 00:38:59.254 "seek_data": false, 00:38:59.254 "copy": false, 00:38:59.254 "nvme_iov_md": false 00:38:59.254 }, 00:38:59.254 "driver_specific": { 00:38:59.254 "ftl": { 00:38:59.254 "base_bdev": "f0aa948e-2896-4409-a611-4eef4cbc0ea4", 00:38:59.254 "cache": "nvc0n1p0" 00:38:59.254 } 00:38:59.254 } 00:38:59.254 } 00:38:59.254 ] 00:38:59.254 13:59:06 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:38:59.254 13:59:06 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:38:59.254 13:59:06 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:38:59.254 13:59:06 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:38:59.254 13:59:06 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:38:59.513 13:59:07 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:38:59.513 { 00:38:59.513 "name": "ftl0", 00:38:59.513 "aliases": [ 00:38:59.513 "de3f65f3-0236-49e4-9c47-e95c64595e9c" 00:38:59.513 ], 00:38:59.513 "product_name": "FTL disk", 00:38:59.513 "block_size": 4096, 00:38:59.513 "num_blocks": 23592960, 00:38:59.513 "uuid": "de3f65f3-0236-49e4-9c47-e95c64595e9c", 00:38:59.513 "assigned_rate_limits": { 00:38:59.513 "rw_ios_per_sec": 0, 00:38:59.513 "rw_mbytes_per_sec": 0, 00:38:59.513 "r_mbytes_per_sec": 0, 00:38:59.513 "w_mbytes_per_sec": 0 00:38:59.513 }, 00:38:59.513 "claimed": false, 00:38:59.513 "zoned": false, 00:38:59.513 "supported_io_types": { 00:38:59.513 "read": true, 00:38:59.513 "write": true, 00:38:59.513 "unmap": true, 00:38:59.513 "flush": true, 00:38:59.513 "reset": false, 00:38:59.513 "nvme_admin": false, 00:38:59.513 "nvme_io": false, 00:38:59.513 "nvme_io_md": false, 00:38:59.513 "write_zeroes": true, 00:38:59.513 "zcopy": false, 00:38:59.513 "get_zone_info": false, 00:38:59.513 "zone_management": false, 00:38:59.513 "zone_append": false, 00:38:59.513 "compare": false, 00:38:59.513 "compare_and_write": false, 00:38:59.513 "abort": false, 00:38:59.513 "seek_hole": false, 00:38:59.513 "seek_data": false, 00:38:59.513 "copy": false, 00:38:59.513 "nvme_iov_md": false 00:38:59.513 }, 00:38:59.513 "driver_specific": { 00:38:59.513 "ftl": { 00:38:59.513 "base_bdev": "f0aa948e-2896-4409-a611-4eef4cbc0ea4", 
00:38:59.513 "cache": "nvc0n1p0" 00:38:59.513 } 00:38:59.513 } 00:38:59.513 } 00:38:59.513 ]' 00:38:59.513 13:59:07 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:38:59.513 13:59:07 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:38:59.513 13:59:07 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:38:59.773 [2024-11-20 13:59:07.367447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.773 [2024-11-20 13:59:07.367504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:38:59.773 [2024-11-20 13:59:07.367522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:38:59.773 [2024-11-20 13:59:07.367536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.773 [2024-11-20 13:59:07.367574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:38:59.773 [2024-11-20 13:59:07.371565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.773 [2024-11-20 13:59:07.371604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:38:59.773 [2024-11-20 13:59:07.371640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.965 ms 00:38:59.773 [2024-11-20 13:59:07.371648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.773 [2024-11-20 13:59:07.372214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.773 [2024-11-20 13:59:07.372234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:38:59.773 [2024-11-20 13:59:07.372246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.498 ms 00:38:59.773 [2024-11-20 13:59:07.372254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.773 [2024-11-20 13:59:07.374955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.773 [2024-11-20 13:59:07.374980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:38:59.773 [2024-11-20 13:59:07.374990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.677 ms 00:38:59.773 [2024-11-20 13:59:07.374998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.773 [2024-11-20 13:59:07.380481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.773 [2024-11-20 13:59:07.380511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:38:59.773 [2024-11-20 13:59:07.380522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.457 ms 00:38:59.773 [2024-11-20 13:59:07.380530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.773 [2024-11-20 13:59:07.416326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.773 [2024-11-20 13:59:07.416365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:38:59.773 [2024-11-20 13:59:07.416382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.772 ms 00:38:59.774 [2024-11-20 13:59:07.416391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.774 [2024-11-20 13:59:07.438003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.774 [2024-11-20 13:59:07.438039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:38:59.774 [2024-11-20 13:59:07.438054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 21.572 ms 00:38:59.774 [2024-11-20 13:59:07.438066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.774 [2024-11-20 13:59:07.438274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.774 [2024-11-20 13:59:07.438286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:38:59.774 [2024-11-20 13:59:07.438297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:38:59.774 [2024-11-20 13:59:07.438305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:38:59.774 [2024-11-20 13:59:07.473438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:38:59.774 [2024-11-20 13:59:07.473475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:38:59.774 [2024-11-20 13:59:07.473488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.167 ms 00:38:59.774 [2024-11-20 13:59:07.473496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.035 [2024-11-20 13:59:07.508700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.035 [2024-11-20 13:59:07.508743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:00.035 [2024-11-20 13:59:07.508760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.195 ms 00:39:00.035 [2024-11-20 13:59:07.508767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.035 [2024-11-20 13:59:07.543254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.035 [2024-11-20 13:59:07.543333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:00.035 [2024-11-20 13:59:07.543351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.474 ms 00:39:00.035 [2024-11-20 13:59:07.543358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.035 [2024-11-20 13:59:07.578459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.035 [2024-11-20 13:59:07.578493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:00.035 [2024-11-20 13:59:07.578506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.038 ms 00:39:00.035 [2024-11-20 13:59:07.578513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.035 [2024-11-20 13:59:07.578592] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:00.035 [2024-11-20 13:59:07.578608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:00.035 [2024-11-20 13:59:07.578621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:00.035 [2024-11-20 13:59:07.578630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:00.035 [2024-11-20 13:59:07.578639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:00.035 [2024-11-20 13:59:07.578647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:00.035 [2024-11-20 13:59:07.578660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:00.035 [2024-11-20 13:59:07.578669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:00.035 [2024-11-20 13:59:07.578679] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
[ftl_dev_dump_bands entries for Bands 9 through 100 elided: every band reports the identical 0 / 261120 wr_cnt: 0 state: free]
00:39:00.036 [2024-11-20 13:59:07.579563] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:00.036 [2024-11-20 13:59:07.579575] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: de3f65f3-0236-49e4-9c47-e95c64595e9c 00:39:00.036 [2024-11-20 13:59:07.579583] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:00.036 [2024-11-20 13:59:07.579598] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:00.036 [2024-11-20 13:59:07.579605] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:00.036 [2024-11-20 13:59:07.579618] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:00.036 [2024-11-20 13:59:07.579625] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:00.036 [2024-11-20 13:59:07.579634] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:39:00.036 [2024-11-20 13:59:07.579642] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:00.036 [2024-11-20 13:59:07.579650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:00.036 [2024-11-20 13:59:07.579657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:00.036 [2024-11-20 13:59:07.579676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.036 [2024-11-20 13:59:07.579684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:00.036 [2024-11-20 13:59:07.579698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.079 ms 00:39:00.036 [2024-11-20 13:59:07.579706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.036 [2024-11-20 13:59:07.599208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.036 [2024-11-20 13:59:07.599243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:00.036 [2024-11-20 13:59:07.599259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.490 ms 00:39:00.036 [2024-11-20 13:59:07.599266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.036 [2024-11-20 13:59:07.599847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:00.036 [2024-11-20 13:59:07.599861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:00.036 [2024-11-20 13:59:07.599875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.515 ms 00:39:00.036 [2024-11-20 13:59:07.599885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.036 [2024-11-20 13:59:07.667096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.036 [2024-11-20 13:59:07.667141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:00.036 [2024-11-20 13:59:07.667154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.036 [2024-11-20 13:59:07.667162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.036 [2024-11-20 13:59:07.667307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.036 [2024-11-20 13:59:07.667334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:00.036 [2024-11-20 13:59:07.667344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.036 [2024-11-20 13:59:07.667353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.036 [2024-11-20 13:59:07.667426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.036 [2024-11-20 13:59:07.667438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:00.036 [2024-11-20 13:59:07.667453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.036 [2024-11-20 13:59:07.667461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.036 [2024-11-20 13:59:07.667495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.036 [2024-11-20 13:59:07.667503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:00.036 [2024-11-20 13:59:07.667513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.036 [2024-11-20 13:59:07.667521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.297 [2024-11-20 13:59:07.797337] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.297 [2024-11-20 13:59:07.797396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:00.297 [2024-11-20 13:59:07.797411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.297 [2024-11-20 13:59:07.797419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.297 [2024-11-20 13:59:07.895413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.297 [2024-11-20 13:59:07.895528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:00.297 [2024-11-20 13:59:07.895547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.297 [2024-11-20 13:59:07.895556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.297 [2024-11-20 13:59:07.895665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.297 [2024-11-20 13:59:07.895675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:00.297 [2024-11-20 13:59:07.895707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.297 [2024-11-20 13:59:07.895738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.297 [2024-11-20 13:59:07.895793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.297 [2024-11-20 13:59:07.895802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:00.297 [2024-11-20 13:59:07.895813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.297 [2024-11-20 13:59:07.895821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.297 [2024-11-20 13:59:07.895948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.297 [2024-11-20 13:59:07.895959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:00.297 [2024-11-20 13:59:07.895970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.297 [2024-11-20 13:59:07.895981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.297 [2024-11-20 13:59:07.896039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.297 [2024-11-20 13:59:07.896050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:00.297 [2024-11-20 13:59:07.896060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.297 [2024-11-20 13:59:07.896068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.297 [2024-11-20 13:59:07.896127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.297 [2024-11-20 13:59:07.896138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:00.297 [2024-11-20 13:59:07.896150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.297 [2024-11-20 13:59:07.896157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:00.297 [2024-11-20 13:59:07.896221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:00.297 [2024-11-20 13:59:07.896230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:00.297 [2024-11-20 13:59:07.896241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:00.297 [2024-11-20 13:59:07.896249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:39:00.297 [2024-11-20 13:59:07.896450] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 530.000 ms, result 0 00:39:00.297 true 00:39:00.297 13:59:07 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 79059 00:39:00.297 13:59:07 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79059 ']' 00:39:00.297 13:59:07 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79059 00:39:00.297 13:59:07 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:39:00.297 13:59:07 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:00.297 13:59:07 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79059 00:39:00.297 killing process with pid 79059 00:39:00.297 13:59:07 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:00.297 13:59:07 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:00.297 13:59:07 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79059' 00:39:00.297 13:59:07 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79059 00:39:00.297 13:59:07 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79059 00:39:08.425 13:59:14 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:39:08.425 65536+0 records in 00:39:08.425 65536+0 records out 00:39:08.425 268435456 bytes (268 MB, 256 MiB) copied, 0.86367 s, 311 MB/s 00:39:08.425 13:59:15 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:08.425 [2024-11-20 13:59:15.739915] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
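The dd step above generates the 256 MiB random pattern that spdk_dd then replays into ftl0, and the reported byte count and rate follow directly from its parameters. A quick shell check of the arithmetic (nothing here is SPDK-specific):

```bash
# 65536 blocks of 4 KiB each:
echo $(( 65536 * 4096 ))                          # 268435456 bytes = 256 MiB
# dd reports decimal megabytes per second:
echo "scale=1; 268435456 / 0.86367 / 10^6" | bc   # ~310.8, i.e. the logged 311 MB/s
```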
00:39:08.425 [2024-11-20 13:59:15.740130] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79308 ] 00:39:08.425 [2024-11-20 13:59:15.916209] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.425 [2024-11-20 13:59:16.029553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.684 [2024-11-20 13:59:16.381434] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:08.684 [2024-11-20 13:59:16.381591] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:08.946 [2024-11-20 13:59:16.540584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.946 [2024-11-20 13:59:16.540633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:08.946 [2024-11-20 13:59:16.540646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:08.946 [2024-11-20 13:59:16.540654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.946 [2024-11-20 13:59:16.543514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.946 [2024-11-20 13:59:16.543550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:08.946 [2024-11-20 13:59:16.543559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.847 ms 00:39:08.946 [2024-11-20 13:59:16.543566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.946 [2024-11-20 13:59:16.543671] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:08.946 [2024-11-20 13:59:16.544638] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:08.946 [2024-11-20 13:59:16.544672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.946 [2024-11-20 13:59:16.544681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:08.946 [2024-11-20 13:59:16.544689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:39:08.946 [2024-11-20 13:59:16.544697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.946 [2024-11-20 13:59:16.546206] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:39:08.946 [2024-11-20 13:59:16.564428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.946 [2024-11-20 13:59:16.564467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:08.946 [2024-11-20 13:59:16.564479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.258 ms 00:39:08.946 [2024-11-20 13:59:16.564487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.946 [2024-11-20 13:59:16.564579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.946 [2024-11-20 13:59:16.564590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:08.946 [2024-11-20 13:59:16.564599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:39:08.946 [2024-11-20 13:59:16.564607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.946 [2024-11-20 13:59:16.571329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:39:08.946 [2024-11-20 13:59:16.571356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:08.946 [2024-11-20 13:59:16.571366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.696 ms 00:39:08.946 [2024-11-20 13:59:16.571373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.946 [2024-11-20 13:59:16.571490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.946 [2024-11-20 13:59:16.571503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:08.946 [2024-11-20 13:59:16.571512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:39:08.946 [2024-11-20 13:59:16.571520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.946 [2024-11-20 13:59:16.571549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.946 [2024-11-20 13:59:16.571560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:08.946 [2024-11-20 13:59:16.571568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:39:08.946 [2024-11-20 13:59:16.571575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.946 [2024-11-20 13:59:16.571603] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:39:08.946 [2024-11-20 13:59:16.576379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.946 [2024-11-20 13:59:16.576442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:08.946 [2024-11-20 13:59:16.576470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.798 ms 00:39:08.946 [2024-11-20 13:59:16.576489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.946 [2024-11-20 13:59:16.576560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.946 [2024-11-20 13:59:16.576585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:08.946 [2024-11-20 13:59:16.576622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:39:08.946 [2024-11-20 13:59:16.576641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.946 [2024-11-20 13:59:16.576674] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:08.946 [2024-11-20 13:59:16.576757] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:08.946 [2024-11-20 13:59:16.576825] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:08.946 [2024-11-20 13:59:16.576882] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:39:08.946 [2024-11-20 13:59:16.577004] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:08.946 [2024-11-20 13:59:16.577043] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:08.946 [2024-11-20 13:59:16.577088] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:08.946 [2024-11-20 13:59:16.577134] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:08.946 [2024-11-20 13:59:16.577174] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:08.946 [2024-11-20 13:59:16.577212] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:39:08.946 [2024-11-20 13:59:16.577240] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:08.946 [2024-11-20 13:59:16.577260] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:08.946 [2024-11-20 13:59:16.577280] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:08.946 [2024-11-20 13:59:16.577301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.946 [2024-11-20 13:59:16.577336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:08.946 [2024-11-20 13:59:16.577366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.631 ms 00:39:08.946 [2024-11-20 13:59:16.577394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.946 [2024-11-20 13:59:16.577483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.946 [2024-11-20 13:59:16.577516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:08.946 [2024-11-20 13:59:16.577545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:39:08.946 [2024-11-20 13:59:16.577565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.946 [2024-11-20 13:59:16.577671] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:08.947 [2024-11-20 13:59:16.577702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:08.947 [2024-11-20 13:59:16.577737] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:08.947 [2024-11-20 13:59:16.577766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:08.947 [2024-11-20 13:59:16.577794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:08.947 [2024-11-20 13:59:16.577813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:08.947 [2024-11-20 13:59:16.577838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:39:08.947 [2024-11-20 13:59:16.577857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:08.947 [2024-11-20 13:59:16.577876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:39:08.947 [2024-11-20 13:59:16.577909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:08.947 [2024-11-20 13:59:16.577927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:08.947 [2024-11-20 13:59:16.577947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:39:08.947 [2024-11-20 13:59:16.577972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:08.947 [2024-11-20 13:59:16.578018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:08.947 [2024-11-20 13:59:16.578046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:39:08.947 [2024-11-20 13:59:16.578073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:08.947 [2024-11-20 13:59:16.578092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:08.947 [2024-11-20 13:59:16.578111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:39:08.947 [2024-11-20 13:59:16.578144] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:08.947 [2024-11-20 13:59:16.578163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:08.947 [2024-11-20 13:59:16.578188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:39:08.947 [2024-11-20 13:59:16.578213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:08.947 [2024-11-20 13:59:16.578232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:08.947 [2024-11-20 13:59:16.578251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:39:08.947 [2024-11-20 13:59:16.578287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:08.947 [2024-11-20 13:59:16.578306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:08.947 [2024-11-20 13:59:16.578326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:39:08.947 [2024-11-20 13:59:16.578351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:08.947 [2024-11-20 13:59:16.578370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:08.947 [2024-11-20 13:59:16.578390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:39:08.947 [2024-11-20 13:59:16.578419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:08.947 [2024-11-20 13:59:16.578444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:08.947 [2024-11-20 13:59:16.578463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:39:08.947 [2024-11-20 13:59:16.578486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:08.947 [2024-11-20 13:59:16.578506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:08.947 [2024-11-20 13:59:16.578526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:39:08.947 [2024-11-20 13:59:16.578547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:08.947 [2024-11-20 13:59:16.578566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:08.947 [2024-11-20 13:59:16.578591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:39:08.947 [2024-11-20 13:59:16.578610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:08.947 [2024-11-20 13:59:16.578629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:08.947 [2024-11-20 13:59:16.578648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:39:08.947 [2024-11-20 13:59:16.578682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:08.947 [2024-11-20 13:59:16.578701] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:08.947 [2024-11-20 13:59:16.578728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:08.947 [2024-11-20 13:59:16.578749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:08.947 [2024-11-20 13:59:16.578784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:08.947 [2024-11-20 13:59:16.578813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:08.947 [2024-11-20 13:59:16.578832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:08.947 [2024-11-20 13:59:16.578861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:08.947 
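The dump_region records above give each region's offset and size in MiB, while the superblock metadata dump that follows repeats the same layout in raw 4 KiB blocks (hex blk_offs/blk_sz). The two views line up; for example, the 90.00 MiB l2p region corresponds to the Region type:0x2 entry with blk_sz:0x5a00, using the 4096-byte block size reported in the bdev descriptor earlier:

```bash
# 0x5a00 blocks x 4096 bytes per block, expressed in MiB:
echo $(( 0x5a00 * 4096 / 1024 / 1024 ))   # 90 -> "Region l2p ... blocks: 90.00 MiB"
```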
[2024-11-20 13:59:16.578881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:08.947 [2024-11-20 13:59:16.578899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:08.947 [2024-11-20 13:59:16.578931] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:08.947 [2024-11-20 13:59:16.578953] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:08.947 [2024-11-20 13:59:16.578992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:08.947 [2024-11-20 13:59:16.579031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:39:08.947 [2024-11-20 13:59:16.579056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:39:08.947 [2024-11-20 13:59:16.579065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:39:08.947 [2024-11-20 13:59:16.579071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:39:08.947 [2024-11-20 13:59:16.579078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:39:08.947 [2024-11-20 13:59:16.579085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:39:08.947 [2024-11-20 13:59:16.579092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:39:08.947 [2024-11-20 13:59:16.579099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:39:08.947 [2024-11-20 13:59:16.579105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:39:08.947 [2024-11-20 13:59:16.579112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:39:08.947 [2024-11-20 13:59:16.579119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:39:08.947 [2024-11-20 13:59:16.579126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:39:08.947 [2024-11-20 13:59:16.579132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:39:08.947 [2024-11-20 13:59:16.579138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:39:08.947 [2024-11-20 13:59:16.579146] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:08.947 [2024-11-20 13:59:16.579154] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:08.947 [2024-11-20 13:59:16.579162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:39:08.947 [2024-11-20 13:59:16.579169] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:08.947 [2024-11-20 13:59:16.579175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:08.947 [2024-11-20 13:59:16.579182] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:08.947 [2024-11-20 13:59:16.579191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.947 [2024-11-20 13:59:16.579199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:08.947 [2024-11-20 13:59:16.579211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.577 ms 00:39:08.947 [2024-11-20 13:59:16.579218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.947 [2024-11-20 13:59:16.617104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.947 [2024-11-20 13:59:16.617195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:08.947 [2024-11-20 13:59:16.617243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.897 ms 00:39:08.948 [2024-11-20 13:59:16.617264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:08.948 [2024-11-20 13:59:16.617409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:08.948 [2024-11-20 13:59:16.617474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:08.948 [2024-11-20 13:59:16.617525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:39:08.948 [2024-11-20 13:59:16.617546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.208 [2024-11-20 13:59:16.676119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.208 [2024-11-20 13:59:16.676208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:09.208 [2024-11-20 13:59:16.676238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.644 ms 00:39:09.208 [2024-11-20 13:59:16.676263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.208 [2024-11-20 13:59:16.676383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.208 [2024-11-20 13:59:16.676408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:09.208 [2024-11-20 13:59:16.676438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:09.208 [2024-11-20 13:59:16.676460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.208 [2024-11-20 13:59:16.676919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.208 [2024-11-20 13:59:16.676959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:09.208 [2024-11-20 13:59:16.676992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:39:09.208 [2024-11-20 13:59:16.677025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.208 [2024-11-20 13:59:16.677156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.208 [2024-11-20 13:59:16.677193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:09.208 [2024-11-20 13:59:16.677223] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:39:09.208 [2024-11-20 13:59:16.677250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.208 [2024-11-20 13:59:16.695925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.208 [2024-11-20 13:59:16.696001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:09.208 [2024-11-20 13:59:16.696028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.675 ms 00:39:09.209 [2024-11-20 13:59:16.696049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.209 [2024-11-20 13:59:16.713608] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:39:09.209 [2024-11-20 13:59:16.713690] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:09.209 [2024-11-20 13:59:16.713765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.209 [2024-11-20 13:59:16.713787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:09.209 [2024-11-20 13:59:16.713808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.610 ms 00:39:09.209 [2024-11-20 13:59:16.713828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.209 [2024-11-20 13:59:16.741456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.209 [2024-11-20 13:59:16.741524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:09.209 [2024-11-20 13:59:16.741584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.594 ms 00:39:09.209 [2024-11-20 13:59:16.741604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.209 [2024-11-20 13:59:16.758906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.209 [2024-11-20 13:59:16.758970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:09.209 [2024-11-20 13:59:16.759011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.254 ms 00:39:09.209 [2024-11-20 13:59:16.759030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.209 [2024-11-20 13:59:16.775670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.209 [2024-11-20 13:59:16.775744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:09.209 [2024-11-20 13:59:16.775788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.597 ms 00:39:09.209 [2024-11-20 13:59:16.775807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.209 [2024-11-20 13:59:16.776554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.209 [2024-11-20 13:59:16.776614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:09.209 [2024-11-20 13:59:16.776646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:39:09.209 [2024-11-20 13:59:16.776666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.209 [2024-11-20 13:59:16.858691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.209 [2024-11-20 13:59:16.858841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:09.209 [2024-11-20 13:59:16.858891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 82.094 ms 00:39:09.209 [2024-11-20 13:59:16.858911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.209 [2024-11-20 13:59:16.869066] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:39:09.209 [2024-11-20 13:59:16.885005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.209 [2024-11-20 13:59:16.885160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:09.209 [2024-11-20 13:59:16.885207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.013 ms 00:39:09.209 [2024-11-20 13:59:16.885228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.209 [2024-11-20 13:59:16.885423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.209 [2024-11-20 13:59:16.885456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:09.209 [2024-11-20 13:59:16.885485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:09.209 [2024-11-20 13:59:16.885512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.209 [2024-11-20 13:59:16.885589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.209 [2024-11-20 13:59:16.885614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:09.209 [2024-11-20 13:59:16.885645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:39:09.209 [2024-11-20 13:59:16.885665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.209 [2024-11-20 13:59:16.885746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.209 [2024-11-20 13:59:16.885783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:09.209 [2024-11-20 13:59:16.885818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:39:09.209 [2024-11-20 13:59:16.885846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.209 [2024-11-20 13:59:16.885902] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:09.209 [2024-11-20 13:59:16.885937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.209 [2024-11-20 13:59:16.885963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:09.209 [2024-11-20 13:59:16.885992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:39:09.209 [2024-11-20 13:59:16.886018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.209 [2024-11-20 13:59:16.921594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.209 [2024-11-20 13:59:16.921693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:09.209 [2024-11-20 13:59:16.921729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.607 ms 00:39:09.209 [2024-11-20 13:59:16.921751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:09.209 [2024-11-20 13:59:16.921883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:09.209 [2024-11-20 13:59:16.921913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:09.209 [2024-11-20 13:59:16.921979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:39:09.209 [2024-11-20 13:59:16.922000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:39:09.209 [2024-11-20 13:59:16.923004] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:09.469 [2024-11-20 13:59:16.927494] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 382.806 ms, result 0 00:39:09.469 [2024-11-20 13:59:16.928480] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:09.469 [2024-11-20 13:59:16.946862] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:10.408  [2024-11-20T13:59:19.066Z] Copying: 27/256 [MB] (27 MBps) [2024-11-20T13:59:20.009Z] Copying: 55/256 [MB] (28 MBps) [2024-11-20T13:59:20.953Z] Copying: 84/256 [MB] (28 MBps) [2024-11-20T13:59:22.335Z] Copying: 112/256 [MB] (27 MBps) [2024-11-20T13:59:23.275Z] Copying: 140/256 [MB] (27 MBps) [2024-11-20T13:59:24.214Z] Copying: 169/256 [MB] (28 MBps) [2024-11-20T13:59:25.153Z] Copying: 197/256 [MB] (28 MBps) [2024-11-20T13:59:26.092Z] Copying: 226/256 [MB] (28 MBps) [2024-11-20T13:59:26.092Z] Copying: 254/256 [MB] (27 MBps) [2024-11-20T13:59:26.092Z] Copying: 256/256 [MB] (average 28 MBps)[2024-11-20 13:59:26.000992] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:18.374 [2024-11-20 13:59:26.015667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:18.374 [2024-11-20 13:59:26.015790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:18.374 [2024-11-20 13:59:26.015808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:18.374 [2024-11-20 13:59:26.015816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.374 [2024-11-20 13:59:26.015847] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:39:18.374 [2024-11-20 13:59:26.019881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:18.374 [2024-11-20 13:59:26.019908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:18.374 [2024-11-20 13:59:26.019917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.027 ms 00:39:18.374 [2024-11-20 13:59:26.019925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.374 [2024-11-20 13:59:26.021855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:18.374 [2024-11-20 13:59:26.021900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:18.374 [2024-11-20 13:59:26.021911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.910 ms 00:39:18.374 [2024-11-20 13:59:26.021934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.374 [2024-11-20 13:59:26.028251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:18.374 [2024-11-20 13:59:26.028285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:18.374 [2024-11-20 13:59:26.028301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.310 ms 00:39:18.374 [2024-11-20 13:59:26.028308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.374 [2024-11-20 13:59:26.033747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:18.374 [2024-11-20 13:59:26.033781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:18.374 
[2024-11-20 13:59:26.033790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.406 ms 00:39:18.374 [2024-11-20 13:59:26.033797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.374 [2024-11-20 13:59:26.068628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:18.374 [2024-11-20 13:59:26.068681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:18.374 [2024-11-20 13:59:26.068693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.851 ms 00:39:18.374 [2024-11-20 13:59:26.068701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.374 [2024-11-20 13:59:26.089184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:18.374 [2024-11-20 13:59:26.089222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:18.374 [2024-11-20 13:59:26.089238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.429 ms 00:39:18.374 [2024-11-20 13:59:26.089248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.374 [2024-11-20 13:59:26.089378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:18.374 [2024-11-20 13:59:26.089389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:18.374 [2024-11-20 13:59:26.089397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:39:18.374 [2024-11-20 13:59:26.089404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.636 [2024-11-20 13:59:26.125344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:18.636 [2024-11-20 13:59:26.125446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:18.636 [2024-11-20 13:59:26.125461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.993 ms 00:39:18.636 [2024-11-20 13:59:26.125468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.636 [2024-11-20 13:59:26.159820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:18.636 [2024-11-20 13:59:26.159854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:18.636 [2024-11-20 13:59:26.159864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.368 ms 00:39:18.636 [2024-11-20 13:59:26.159871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.636 [2024-11-20 13:59:26.193425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:18.636 [2024-11-20 13:59:26.193457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:18.636 [2024-11-20 13:59:26.193467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.555 ms 00:39:18.636 [2024-11-20 13:59:26.193474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.636 [2024-11-20 13:59:26.227278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:18.636 [2024-11-20 13:59:26.227313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:18.636 [2024-11-20 13:59:26.227322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.800 ms 00:39:18.636 [2024-11-20 13:59:26.227329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.636 [2024-11-20 13:59:26.227373] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:18.636 [2024-11-20 13:59:26.227393] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 
13:59:26.227577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 
00:39:18.636 [2024-11-20 13:59:26.227784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:18.636 [2024-11-20 13:59:26.227807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 
wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.227994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:18.637 [2024-11-20 13:59:26.228192] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:18.637 [2024-11-20 13:59:26.228200] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: de3f65f3-0236-49e4-9c47-e95c64595e9c 00:39:18.637 [2024-11-20 13:59:26.228208] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:18.637 [2024-11-20 13:59:26.228216] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:18.637 [2024-11-20 13:59:26.228223] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:18.637 [2024-11-20 13:59:26.228231] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:18.637 [2024-11-20 13:59:26.228239] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:18.637 [2024-11-20 13:59:26.228247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:18.637 [2024-11-20 13:59:26.228255] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:18.637 [2024-11-20 13:59:26.228262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:18.637 [2024-11-20 13:59:26.228268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:18.637 [2024-11-20 13:59:26.228276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:18.637 [2024-11-20 13:59:26.228283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:18.637 [2024-11-20 13:59:26.228295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.906 ms 00:39:18.637 [2024-11-20 13:59:26.228303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.637 [2024-11-20 13:59:26.247451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:18.637 [2024-11-20 13:59:26.247482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:18.637 [2024-11-20 13:59:26.247491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.164 ms 00:39:18.637 [2024-11-20 13:59:26.247498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.637 [2024-11-20 13:59:26.248071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:18.637 [2024-11-20 13:59:26.248086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:18.637 [2024-11-20 13:59:26.248094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.527 ms 00:39:18.637 [2024-11-20 13:59:26.248102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.637 [2024-11-20 13:59:26.300194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:18.637 [2024-11-20 13:59:26.300231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:18.637 [2024-11-20 13:59:26.300242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:18.637 [2024-11-20 13:59:26.300266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.637 [2024-11-20 13:59:26.300364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:18.637 [2024-11-20 13:59:26.300376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:18.637 [2024-11-20 13:59:26.300384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:18.637 [2024-11-20 13:59:26.300392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:39:18.637 [2024-11-20 13:59:26.300443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:18.637 [2024-11-20 13:59:26.300455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:18.637 [2024-11-20 13:59:26.300463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:18.637 [2024-11-20 13:59:26.300470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.637 [2024-11-20 13:59:26.300489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:18.637 [2024-11-20 13:59:26.300496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:18.637 [2024-11-20 13:59:26.300509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:18.637 [2024-11-20 13:59:26.300517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.898 [2024-11-20 13:59:26.419641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:18.898 [2024-11-20 13:59:26.419706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:18.898 [2024-11-20 13:59:26.419730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:18.898 [2024-11-20 13:59:26.419738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.898 [2024-11-20 13:59:26.515955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:18.898 [2024-11-20 13:59:26.516024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:18.898 [2024-11-20 13:59:26.516037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:18.898 [2024-11-20 13:59:26.516045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.898 [2024-11-20 13:59:26.516121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:18.898 [2024-11-20 13:59:26.516130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:18.898 [2024-11-20 13:59:26.516138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:18.898 [2024-11-20 13:59:26.516146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.898 [2024-11-20 13:59:26.516173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:18.898 [2024-11-20 13:59:26.516181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:18.898 [2024-11-20 13:59:26.516188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:18.898 [2024-11-20 13:59:26.516200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.898 [2024-11-20 13:59:26.516305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:18.898 [2024-11-20 13:59:26.516318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:18.898 [2024-11-20 13:59:26.516326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:18.898 [2024-11-20 13:59:26.516333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.898 [2024-11-20 13:59:26.516368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:18.898 [2024-11-20 13:59:26.516378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:18.898 [2024-11-20 13:59:26.516386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:18.898 
[2024-11-20 13:59:26.516393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.898 [2024-11-20 13:59:26.516440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:18.898 [2024-11-20 13:59:26.516449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:18.898 [2024-11-20 13:59:26.516456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:18.898 [2024-11-20 13:59:26.516464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.898 [2024-11-20 13:59:26.516509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:18.898 [2024-11-20 13:59:26.516518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:18.898 [2024-11-20 13:59:26.516526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:18.898 [2024-11-20 13:59:26.516536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:18.898 [2024-11-20 13:59:26.516679] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 501.966 ms, result 0 00:39:19.863 00:39:19.863 00:39:20.123 13:59:27 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:39:20.123 13:59:27 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=79428 00:39:20.123 13:59:27 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 79428 00:39:20.123 13:59:27 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79428 ']' 00:39:20.123 13:59:27 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:20.123 13:59:27 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:20.123 13:59:27 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:20.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:20.123 13:59:27 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:20.123 13:59:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:39:20.123 [2024-11-20 13:59:27.690934] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:39:20.123 [2024-11-20 13:59:27.691072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79428 ] 00:39:20.383 [2024-11-20 13:59:27.865841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:20.383 [2024-11-20 13:59:27.975027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:21.322 13:59:28 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:21.322 13:59:28 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:39:21.322 13:59:28 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:39:21.322 [2024-11-20 13:59:28.997136] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:21.322 [2024-11-20 13:59:28.997272] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:21.583 [2024-11-20 13:59:29.174274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.583 [2024-11-20 13:59:29.174330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:21.583 [2024-11-20 13:59:29.174372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:21.583 [2024-11-20 13:59:29.174381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.583 [2024-11-20 13:59:29.177949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.583 [2024-11-20 13:59:29.177986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:21.583 [2024-11-20 13:59:29.177998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.556 ms 00:39:21.583 [2024-11-20 13:59:29.178021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.583 [2024-11-20 13:59:29.178116] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:21.583 [2024-11-20 13:59:29.179067] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:21.583 [2024-11-20 13:59:29.179113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.583 [2024-11-20 13:59:29.179122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:21.583 [2024-11-20 13:59:29.179132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.010 ms 00:39:21.583 [2024-11-20 13:59:29.179139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.583 [2024-11-20 13:59:29.180553] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:39:21.583 [2024-11-20 13:59:29.199063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.583 [2024-11-20 13:59:29.199160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:21.583 [2024-11-20 13:59:29.199175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.550 ms 00:39:21.583 [2024-11-20 13:59:29.199186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.583 [2024-11-20 13:59:29.199273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.583 [2024-11-20 13:59:29.199304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:21.583 [2024-11-20 13:59:29.199314] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:39:21.583 [2024-11-20 13:59:29.199323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.583 [2024-11-20 13:59:29.205871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.583 [2024-11-20 13:59:29.205964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:21.583 [2024-11-20 13:59:29.205993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.510 ms 00:39:21.583 [2024-11-20 13:59:29.206007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.583 [2024-11-20 13:59:29.206165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.583 [2024-11-20 13:59:29.206183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:21.583 [2024-11-20 13:59:29.206193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:39:21.583 [2024-11-20 13:59:29.206205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.583 [2024-11-20 13:59:29.206241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.583 [2024-11-20 13:59:29.206254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:21.583 [2024-11-20 13:59:29.206263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:39:21.583 [2024-11-20 13:59:29.206274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.583 [2024-11-20 13:59:29.206302] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:39:21.583 [2024-11-20 13:59:29.211033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.583 [2024-11-20 13:59:29.211061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:21.583 [2024-11-20 13:59:29.211075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.745 ms 00:39:21.583 [2024-11-20 13:59:29.211099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.583 [2024-11-20 13:59:29.211172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.583 [2024-11-20 13:59:29.211182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:21.583 [2024-11-20 13:59:29.211195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:39:21.583 [2024-11-20 13:59:29.211205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.583 [2024-11-20 13:59:29.211230] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:21.583 [2024-11-20 13:59:29.211249] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:21.583 [2024-11-20 13:59:29.211294] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:21.583 [2024-11-20 13:59:29.211312] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:39:21.583 [2024-11-20 13:59:29.211402] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:21.583 [2024-11-20 13:59:29.211412] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:21.583 [2024-11-20 13:59:29.211430] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:21.583 [2024-11-20 13:59:29.211440] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:21.583 [2024-11-20 13:59:29.211451] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:21.583 [2024-11-20 13:59:29.211459] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:39:21.583 [2024-11-20 13:59:29.211469] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:21.583 [2024-11-20 13:59:29.211476] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:21.583 [2024-11-20 13:59:29.211489] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:21.583 [2024-11-20 13:59:29.211497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.584 [2024-11-20 13:59:29.211507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:21.584 [2024-11-20 13:59:29.211515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:39:21.584 [2024-11-20 13:59:29.211524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.584 [2024-11-20 13:59:29.211607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:21.584 [2024-11-20 13:59:29.211618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:21.584 [2024-11-20 13:59:29.211626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:39:21.584 [2024-11-20 13:59:29.211639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:21.584 [2024-11-20 13:59:29.211745] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:21.584 [2024-11-20 13:59:29.211761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:21.584 [2024-11-20 13:59:29.211770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:21.584 [2024-11-20 13:59:29.211783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:21.584 [2024-11-20 13:59:29.211791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:21.584 [2024-11-20 13:59:29.211802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:21.584 [2024-11-20 13:59:29.211810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:39:21.584 [2024-11-20 13:59:29.211828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:21.584 [2024-11-20 13:59:29.211835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:39:21.584 [2024-11-20 13:59:29.211846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:21.584 [2024-11-20 13:59:29.211854] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:21.584 [2024-11-20 13:59:29.211865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:39:21.584 [2024-11-20 13:59:29.211872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:21.584 [2024-11-20 13:59:29.211883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:21.584 [2024-11-20 13:59:29.211891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:39:21.584 [2024-11-20 13:59:29.211901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:21.584 
[2024-11-20 13:59:29.211908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:21.584 [2024-11-20 13:59:29.211920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:39:21.584 [2024-11-20 13:59:29.211927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:21.584 [2024-11-20 13:59:29.211939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:21.584 [2024-11-20 13:59:29.211958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:39:21.584 [2024-11-20 13:59:29.211970] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:21.584 [2024-11-20 13:59:29.211977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:21.584 [2024-11-20 13:59:29.211993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:39:21.584 [2024-11-20 13:59:29.212000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:21.584 [2024-11-20 13:59:29.212016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:21.584 [2024-11-20 13:59:29.212022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:39:21.584 [2024-11-20 13:59:29.212032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:21.584 [2024-11-20 13:59:29.212039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:21.584 [2024-11-20 13:59:29.212048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:39:21.584 [2024-11-20 13:59:29.212054] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:21.584 [2024-11-20 13:59:29.212063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:21.584 [2024-11-20 13:59:29.212070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:39:21.584 [2024-11-20 13:59:29.212080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:21.584 [2024-11-20 13:59:29.212087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:21.584 [2024-11-20 13:59:29.212098] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:39:21.584 [2024-11-20 13:59:29.212104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:21.584 [2024-11-20 13:59:29.212113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:21.584 [2024-11-20 13:59:29.212120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:39:21.584 [2024-11-20 13:59:29.212130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:21.584 [2024-11-20 13:59:29.212136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:21.584 [2024-11-20 13:59:29.212145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:39:21.584 [2024-11-20 13:59:29.212152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:21.584 [2024-11-20 13:59:29.212160] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:21.584 [2024-11-20 13:59:29.212170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:21.584 [2024-11-20 13:59:29.212180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:21.584 [2024-11-20 13:59:29.212187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:21.584 [2024-11-20 13:59:29.212196] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:39:21.584 [2024-11-20 13:59:29.212204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:21.584 [2024-11-20 13:59:29.212213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:21.584 [2024-11-20 13:59:29.212221] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:21.584 [2024-11-20 13:59:29.212230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:21.584 [2024-11-20 13:59:29.212237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:21.584 [2024-11-20 13:59:29.212253] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:21.584 [2024-11-20 13:59:29.212263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:21.584 [2024-11-20 13:59:29.212280] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:39:21.584 [2024-11-20 13:59:29.212288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:39:21.584 [2024-11-20 13:59:29.212301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:39:21.584 [2024-11-20 13:59:29.212310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:39:21.584 [2024-11-20 13:59:29.212323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:39:21.584 [2024-11-20 13:59:29.212330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:39:21.584 [2024-11-20 13:59:29.212342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:39:21.584 [2024-11-20 13:59:29.212349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:39:21.584 [2024-11-20 13:59:29.212361] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:39:21.584 [2024-11-20 13:59:29.212368] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:39:21.584 [2024-11-20 13:59:29.212380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:39:21.584 [2024-11-20 13:59:29.212387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:39:21.584 [2024-11-20 13:59:29.212400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:39:21.584 [2024-11-20 13:59:29.212408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:39:21.584 [2024-11-20 13:59:29.212419] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:21.584 [2024-11-20 
13:59:29.212428] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:39:21.584 [2024-11-20 13:59:29.212445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:39:21.584 [2024-11-20 13:59:29.212453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:39:21.584 [2024-11-20 13:59:29.212464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:39:21.584 [2024-11-20 13:59:29.212472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:39:21.584 [2024-11-20 13:59:29.212485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.584 [2024-11-20 13:59:29.212493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:39:21.584 [2024-11-20 13:59:29.212506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.808 ms
00:39:21.584 [2024-11-20 13:59:29.212513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
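The layout dump above is internally consistent and can be checked by hand: 23592960 L2P entries at 4 bytes each come to exactly the 90.00 MiB l2p region, and the 0x1900000-block data region is the 102400.00 MiB data_btm region. A quick sketch of that arithmetic (it assumes a 4096-byte FTL block size, which is the value that makes the region sizes line up):

  l2p_entries=23592960 l2p_addr_sz=4 blk=4096
  echo "l2p table:   $(( l2p_entries * l2p_addr_sz / 1024 / 1024 )) MiB"   # 90 MiB
  echo "l2p region:  $(( 0x5a00 * blk / 1024 / 1024 )) MiB"                # 90 MiB (Region type:0x2)
  echo "data region: $(( 0x1900000 * blk / 1024 / 1024 )) MiB"             # 102400 MiB (Region type:0x9)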
00:39:21.584 [2024-11-20 13:59:29.251794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.584 [2024-11-20 13:59:29.251837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:39:21.584 [2024-11-20 13:59:29.251854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.280 ms
00:39:21.584 [2024-11-20 13:59:29.251868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.584 [2024-11-20 13:59:29.252021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.584 [2024-11-20 13:59:29.252032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:39:21.584 [2024-11-20 13:59:29.252045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms
00:39:21.584 [2024-11-20 13:59:29.252052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.584 [2024-11-20 13:59:29.298608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.584 [2024-11-20 13:59:29.298655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:39:21.584 [2024-11-20 13:59:29.298673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.615 ms
00:39:21.584 [2024-11-20 13:59:29.298681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.584 [2024-11-20 13:59:29.298816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.584 [2024-11-20 13:59:29.298827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:39:21.584 [2024-11-20 13:59:29.298840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:39:21.585 [2024-11-20 13:59:29.298848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.585 [2024-11-20 13:59:29.299276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.585 [2024-11-20 13:59:29.299295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:39:21.585 [2024-11-20 13:59:29.299312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms
00:39:21.585 [2024-11-20 13:59:29.299321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.585 [2024-11-20 13:59:29.299451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.585 [2024-11-20 13:59:29.299464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:39:21.585 [2024-11-20 13:59:29.299476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms
00:39:21.585 [2024-11-20 13:59:29.299485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.845 [2024-11-20 13:59:29.320842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.845 [2024-11-20 13:59:29.320942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:39:21.845 [2024-11-20 13:59:29.320965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.363 ms
00:39:21.845 [2024-11-20 13:59:29.320974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.845 [2024-11-20 13:59:29.354153] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3
00:39:21.845 [2024-11-20 13:59:29.354189] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:39:21.845 [2024-11-20 13:59:29.354205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.845 [2024-11-20 13:59:29.354230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:39:21.845 [2024-11-20 13:59:29.354242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.154 ms
00:39:21.845 [2024-11-20 13:59:29.354251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.845 [2024-11-20 13:59:29.383386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.845 [2024-11-20 13:59:29.383426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:39:21.845 [2024-11-20 13:59:29.383443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.103 ms
00:39:21.845 [2024-11-20 13:59:29.383468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.845 [2024-11-20 13:59:29.401741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.845 [2024-11-20 13:59:29.401779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:39:21.845 [2024-11-20 13:59:29.401801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.215 ms
00:39:21.845 [2024-11-20 13:59:29.401809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.845 [2024-11-20 13:59:29.419527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.845 [2024-11-20 13:59:29.419560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:39:21.845 [2024-11-20 13:59:29.419574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.656 ms
00:39:21.845 [2024-11-20 13:59:29.419606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.845 [2024-11-20 13:59:29.420431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.845 [2024-11-20 13:59:29.420465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:39:21.845 [2024-11-20 13:59:29.420480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.720 ms
00:39:21.845 [2024-11-20 13:59:29.420487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.845 [2024-11-20 13:59:29.503569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.845 [2024-11-20 13:59:29.503677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:39:21.845 [2024-11-20 13:59:29.503698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.204 ms
00:39:21.845 [2024-11-20 13:59:29.503707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.845 [2024-11-20 13:59:29.514766] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:39:21.845 [2024-11-20 13:59:29.530722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.845 [2024-11-20 13:59:29.530810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:39:21.845 [2024-11-20 13:59:29.530831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.902 ms
00:39:21.845 [2024-11-20 13:59:29.530844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.845 [2024-11-20 13:59:29.530961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.845 [2024-11-20 13:59:29.530978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:39:21.845 [2024-11-20 13:59:29.530988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:39:21.845 [2024-11-20 13:59:29.531000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.845 [2024-11-20 13:59:29.531060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.845 [2024-11-20 13:59:29.531074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:39:21.845 [2024-11-20 13:59:29.531082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:39:21.845 [2024-11-20 13:59:29.531100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.845 [2024-11-20 13:59:29.531124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.845 [2024-11-20 13:59:29.531138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:39:21.845 [2024-11-20 13:59:29.531146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:39:21.845 [2024-11-20 13:59:29.531159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:21.845 [2024-11-20 13:59:29.531201] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:39:21.845 [2024-11-20 13:59:29.531222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:21.845 [2024-11-20 13:59:29.531230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:39:21.845 [2024-11-20 13:59:29.531250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:39:21.845 [2024-11-20 13:59:29.531257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:22.104 [2024-11-20 13:59:29.567840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:22.104 [2024-11-20 13:59:29.567881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:39:22.104 [2024-11-20 13:59:29.567898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.614 ms
00:39:22.105 [2024-11-20 13:59:29.567907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:22.105 [2024-11-20 13:59:29.568028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:22.105 [2024-11-20 13:59:29.568040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:39:22.105 [2024-11-20 13:59:29.568054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms
00:39:22.105 [2024-11-20 13:59:29.568068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:22.105 [2024-11-20 13:59:29.569086] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:39:22.105 [2024-11-20 13:59:29.573381] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 395.186 ms, result 0
00:39:22.105 [2024-11-20 13:59:29.574568] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:39:22.105 Some configs were skipped because the RPC state that can call them passed over.
00:39:22.105 13:59:29 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:39:22.105 [2024-11-20 13:59:29.813612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:22.105 [2024-11-20 13:59:29.813804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:39:22.105 [2024-11-20 13:59:29.813854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.486 ms
00:39:22.105 [2024-11-20 13:59:29.813890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:22.105 [2024-11-20 13:59:29.813961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.839 ms, result 0
00:39:22.105 true
00:39:22.364 13:59:29 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:39:22.364 [2024-11-20 13:59:30.022235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:22.364 [2024-11-20 13:59:30.022380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:39:22.364 [2024-11-20 13:59:30.022428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.443 ms
00:39:22.364 [2024-11-20 13:59:30.022457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:22.364 [2024-11-20 13:59:30.022526] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.744 ms, result 0
00:39:22.364 true
00:39:22.364 13:59:30 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 79428
00:39:22.364 13:59:30 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79428 ']'
00:39:22.364 13:59:30 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79428
00:39:22.364 13:59:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname
00:39:22.364 13:59:30 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:39:22.364 13:59:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79428
00:39:22.624 killing process with pid 79428 13:59:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:39:22.624 13:59:30 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:39:22.624 13:59:30 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79428'
00:39:22.624 13:59:30 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79428
00:39:22.624 13:59:30 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79428
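The two bdev_ftl_unmap calls issued by trim.sh above trim 1024 blocks at each end of the address space: 23591936 + 1024 = 23592960, the L2P entry count reported at startup, so the second call covers the device's last 1024 blocks. The same pattern written out as a standalone sketch (the bdev name, block counts and rpc.py path are exactly the ones from this run):

  l2p_entries=23592960 nb=1024
  # trim the first and the last $nb blocks of the ftl0 bdev
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks $nb
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba $(( l2p_entries - nb )) --num_blocks $nb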
00:39:23.564 [2024-11-20 13:59:31.173223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:23.564 [2024-11-20 13:59:31.173367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:39:23.564 [2024-11-20 13:59:31.173415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:39:23.564 [2024-11-20 13:59:31.173437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:23.564 [2024-11-20 13:59:31.173477] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:39:23.564 [2024-11-20 13:59:31.177693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:23.564 [2024-11-20 13:59:31.177764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:39:23.564 [2024-11-20 13:59:31.177797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.183 ms
00:39:23.564 [2024-11-20 13:59:31.177817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:23.565 [2024-11-20 13:59:31.178102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:23.565 [2024-11-20 13:59:31.178147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:39:23.565 [2024-11-20 13:59:31.178178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms
00:39:23.565 [2024-11-20 13:59:31.178211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:23.565 [2024-11-20 13:59:31.181565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:23.565 [2024-11-20 13:59:31.181631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:39:23.565 [2024-11-20 13:59:31.181664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.326 ms
00:39:23.565 [2024-11-20 13:59:31.181685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:23.565 [2024-11-20 13:59:31.187203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:23.565 [2024-11-20 13:59:31.187264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:39:23.565 [2024-11-20 13:59:31.187293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.465 ms
00:39:23.565 [2024-11-20 13:59:31.187312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:23.565 [2024-11-20 13:59:31.203176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:23.565 [2024-11-20 13:59:31.203393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:39:23.565 [2024-11-20 13:59:31.203436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.777 ms
00:39:23.565 [2024-11-20 13:59:31.203471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:23.565 [2024-11-20 13:59:31.213643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:23.565 [2024-11-20 13:59:31.213727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:39:23.565 [2024-11-20 13:59:31.213760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.084 ms
00:39:23.565 [2024-11-20 13:59:31.213780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:23.565 [2024-11-20 13:59:31.213962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:23.565 [2024-11-20 13:59:31.213993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:39:23.565 [2024-11-20 13:59:31.214016] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:39:23.565 [2024-11-20 13:59:31.214040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:23.565 [2024-11-20 13:59:31.229290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:23.565 [2024-11-20 13:59:31.229352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:23.565 [2024-11-20 13:59:31.229387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.247 ms 00:39:23.565 [2024-11-20 13:59:31.229406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:23.565 [2024-11-20 13:59:31.243920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:23.565 [2024-11-20 13:59:31.243981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:23.565 [2024-11-20 13:59:31.244019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.477 ms 00:39:23.565 [2024-11-20 13:59:31.244038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:23.565 [2024-11-20 13:59:31.258031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:23.565 [2024-11-20 13:59:31.258090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:23.565 [2024-11-20 13:59:31.258124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.953 ms 00:39:23.565 [2024-11-20 13:59:31.258132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:23.565 [2024-11-20 13:59:31.271930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:23.565 [2024-11-20 13:59:31.271959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:23.565 [2024-11-20 13:59:31.271974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.754 ms 00:39:23.565 [2024-11-20 13:59:31.271997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:23.565 [2024-11-20 13:59:31.272035] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:23.565 [2024-11-20 13:59:31.272050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 
13:59:31.272158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:39:23.565 [2024-11-20 13:59:31.272401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:23.565 [2024-11-20 13:59:31.272594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:23.566 [2024-11-20 13:59:31.272911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free
00:39:23.566 [2024-11-20 13:59:31.272923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:39:23.566 [2024-11-20 13:59:31.272930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:39:23.566 [2024-11-20 13:59:31.272939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:39:23.566 [2024-11-20 13:59:31.272946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:39:23.566 [2024-11-20 13:59:31.272956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:39:23.566 [2024-11-20 13:59:31.272974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:39:23.566 [2024-11-20 13:59:31.272994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:39:23.566 [2024-11-20 13:59:31.273002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:39:23.566 [2024-11-20 13:59:31.273011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:39:23.566 [2024-11-20 13:59:31.273018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:39:23.566 [2024-11-20 13:59:31.273028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:39:23.566 [2024-11-20 13:59:31.273036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:39:23.566 [2024-11-20 13:59:31.273047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:39:23.566 [2024-11-20 13:59:31.273054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:39:23.566 [2024-11-20 13:59:31.273064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:39:23.566 [2024-11-20 13:59:31.273083] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:39:23.566 [2024-11-20 13:59:31.273107] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: de3f65f3-0236-49e4-9c47-e95c64595e9c
00:39:23.566 [2024-11-20 13:59:31.273129] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:39:23.566 [2024-11-20 13:59:31.273148] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:39:23.566 [2024-11-20 13:59:31.273155] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:39:23.566 [2024-11-20 13:59:31.273168] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:39:23.566 [2024-11-20 13:59:31.273175] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:39:23.566 [2024-11-20 13:59:31.273188] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:39:23.566 [2024-11-20 13:59:31.273195] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:39:23.566 [2024-11-20 13:59:31.273206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:39:23.566 [2024-11-20 13:59:31.273212] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
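The WAF line in the statistics dump reads as write amplification, presumably total writes over user writes, which is consistent with the two counters above: this run issued nothing but trims (user writes: 0), so all 960 writes are internal metadata traffic and the quotient is printed as inf. A sketch of the same computation with the zero-divisor case handled the way the dump prints it:

  total_writes=960 user_writes=0
  if [ "$user_writes" -eq 0 ]; then
      echo "WAF: inf"
  else
      awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
  fi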
00:39:23.566 [2024-11-20 13:59:31.273224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:23.566 [2024-11-20 13:59:31.273233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:39:23.566 [2024-11-20 13:59:31.273245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.194 ms
00:39:23.566 [2024-11-20 13:59:31.273253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:23.827 [2024-11-20 13:59:31.293075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:23.827 [2024-11-20 13:59:31.293107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:39:23.827 [2024-11-20 13:59:31.293127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.814 ms
00:39:23.827 [2024-11-20 13:59:31.293135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:23.827 [2024-11-20 13:59:31.293664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:23.827 [2024-11-20 13:59:31.293674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:39:23.827 [2024-11-20 13:59:31.293687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.479 ms
00:39:23.827 [2024-11-20 13:59:31.293699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:23.827 [2024-11-20 13:59:31.359956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:39:23.827 [2024-11-20 13:59:31.359999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:39:23.827 [2024-11-20 13:59:31.360014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:39:23.827 [2024-11-20 13:59:31.360038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:23.827 [2024-11-20 13:59:31.360155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:39:23.827 [2024-11-20 13:59:31.360164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:39:23.827 [2024-11-20 13:59:31.360177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:39:23.827 [2024-11-20 13:59:31.360189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:23.827 [2024-11-20 13:59:31.360248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:39:23.827 [2024-11-20 13:59:31.360259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:39:23.827 [2024-11-20 13:59:31.360274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:39:23.827 [2024-11-20 13:59:31.360283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:23.827 [2024-11-20 13:59:31.360306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:39:23.827 [2024-11-20 13:59:31.360315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:39:23.827 [2024-11-20 13:59:31.360327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:39:23.827 [2024-11-20 13:59:31.360334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:23.827 [2024-11-20 13:59:31.480681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:39:23.827 [2024-11-20 13:59:31.480764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:39:23.827 [2024-11-20 13:59:31.480798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:39:23.827 [2024-11-20 13:59:31.480806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:24.087 [2024-11-20 13:59:31.580086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:39:24.087 [2024-11-20 13:59:31.580147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:39:24.087 [2024-11-20 13:59:31.580179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:39:24.087 [2024-11-20 13:59:31.580192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:24.087 [2024-11-20 13:59:31.580288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:39:24.087 [2024-11-20 13:59:31.580299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:39:24.087 [2024-11-20 13:59:31.580314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:39:24.087 [2024-11-20 13:59:31.580322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:24.087 [2024-11-20 13:59:31.580352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:39:24.087 [2024-11-20 13:59:31.580360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:39:24.087 [2024-11-20 13:59:31.580370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:39:24.087 [2024-11-20 13:59:31.580378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:24.087 [2024-11-20 13:59:31.580485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:39:24.087 [2024-11-20 13:59:31.580497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:39:24.087 [2024-11-20 13:59:31.580506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:39:24.087 [2024-11-20 13:59:31.580514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:24.087 [2024-11-20 13:59:31.580557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:39:24.087 [2024-11-20 13:59:31.580567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:39:24.088 [2024-11-20 13:59:31.580578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:39:24.088 [2024-11-20 13:59:31.580584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:24.088 [2024-11-20 13:59:31.580632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:39:24.088 [2024-11-20 13:59:31.580641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:39:24.088 [2024-11-20 13:59:31.580653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:39:24.088 [2024-11-20 13:59:31.580661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:24.088 [2024-11-20 13:59:31.580757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:39:24.088 [2024-11-20 13:59:31.580770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:39:24.088 [2024-11-20 13:59:31.580780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:39:24.088 [2024-11-20 13:59:31.580788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:24.088 [2024-11-20 13:59:31.580961] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 408.491 ms, result 0
00:39:25.027 13:59:32 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data
00:39:25.027 13:59:32 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:39:25.027 [2024-11-20 13:59:32.644792] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... [2024-11-20 13:59:32.644925] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79492 ]
00:39:25.288 [2024-11-20 13:59:32.821534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:39:25.288 [2024-11-20 13:59:32.936665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:39:25.859 [2024-11-20 13:59:33.285194] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:39:25.859 [2024-11-20 13:59:33.285341] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:39:25.859 [2024-11-20 13:59:33.442741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:25.859 [2024-11-20 13:59:33.442793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:39:25.859 [2024-11-20 13:59:33.442805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:39:25.859 [2024-11-20 13:59:33.442813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:25.859 [2024-11-20 13:59:33.445647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:25.859 [2024-11-20 13:59:33.445742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:39:25.859 [2024-11-20 13:59:33.445757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.823 ms
00:39:25.859 [2024-11-20 13:59:33.445765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:25.859 [2024-11-20 13:59:33.445866] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:39:25.859 [2024-11-20 13:59:33.446797] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:39:25.859 [2024-11-20 13:59:33.446827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:25.859 [2024-11-20 13:59:33.446837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:39:25.859 [2024-11-20 13:59:33.446845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms
00:39:25.859 [2024-11-20 13:59:33.446852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:25.859 [2024-11-20 13:59:33.448275] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:39:25.859 [2024-11-20 13:59:33.465953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:25.859 [2024-11-20 13:59:33.465992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:39:25.859 [2024-11-20 13:59:33.466004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.713 ms
00:39:25.859 [2024-11-20 13:59:33.466013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
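The spdk_dd step above plays the role of dd inside an SPDK application: --json recreates the bdev stack in the new process (hence the full FTL startup unfolding here, which loads the superblock persisted during the shutdown earlier), --ib names the input bdev, and --count=65536 blocks are copied out to a plain file. The invocation in generic form, as a sketch (all paths and values are exactly the ones from this run):

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_dd" --ib=ftl0 \
      --of="$SPDK/test/ftl/data" \
      --count=65536 \
      --json="$SPDK/test/ftl/config/ftl.json"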
00:39:25.859 [2024-11-20 13:59:33.466102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:25.859 [2024-11-20 13:59:33.466114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:39:25.859 [2024-11-20 13:59:33.466123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms
00:39:25.860 [2024-11-20 13:59:33.466130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:25.860 [2024-11-20 13:59:33.472606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:25.860 [2024-11-20 13:59:33.472675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:39:25.860 [2024-11-20 13:59:33.472687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.450 ms
00:39:25.860 [2024-11-20 13:59:33.472710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:25.860 [2024-11-20 13:59:33.472823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:25.860 [2024-11-20 13:59:33.472837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:39:25.860 [2024-11-20 13:59:33.472847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms
00:39:25.860 [2024-11-20 13:59:33.472854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:25.860 [2024-11-20 13:59:33.472883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:25.860 [2024-11-20 13:59:33.472895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:39:25.860 [2024-11-20 13:59:33.472902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:39:25.860 [2024-11-20 13:59:33.472910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:25.860 [2024-11-20 13:59:33.472933] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:39:25.860 [2024-11-20 13:59:33.477463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:25.860 [2024-11-20 13:59:33.477490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:39:25.860 [2024-11-20 13:59:33.477500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.547 ms
00:39:25.860 [2024-11-20 13:59:33.477507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:25.860 [2024-11-20 13:59:33.477565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:39:25.860 [2024-11-20 13:59:33.477575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:39:25.860 [2024-11-20 13:59:33.477584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:39:25.860 [2024-11-20 13:59:33.477591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:39:25.860 [2024-11-20 13:59:33.477609] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:39:25.860 [2024-11-20 13:59:33.477631] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:39:25.860 [2024-11-20 13:59:33.477664] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:39:25.860 [2024-11-20 13:59:33.477679] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:39:25.860 [2024-11-20 13:59:33.477803] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:39:25.860 [2024-11-20 13:59:33.477814] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:39:25.860 [2024-11-20 13:59:33.477825] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area:
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:25.860 [2024-11-20 13:59:33.477835] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:25.860 [2024-11-20 13:59:33.477847] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:25.860 [2024-11-20 13:59:33.477864] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:39:25.860 [2024-11-20 13:59:33.477872] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:25.860 [2024-11-20 13:59:33.477880] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:25.860 [2024-11-20 13:59:33.477887] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:25.860 [2024-11-20 13:59:33.477895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.860 [2024-11-20 13:59:33.477902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:25.860 [2024-11-20 13:59:33.477909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:39:25.860 [2024-11-20 13:59:33.477916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.860 [2024-11-20 13:59:33.477989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.860 [2024-11-20 13:59:33.478001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:25.860 [2024-11-20 13:59:33.478008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:39:25.860 [2024-11-20 13:59:33.478016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.860 [2024-11-20 13:59:33.478103] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:25.860 [2024-11-20 13:59:33.478114] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:25.860 [2024-11-20 13:59:33.478122] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:25.860 [2024-11-20 13:59:33.478130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:25.860 [2024-11-20 13:59:33.478138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:25.860 [2024-11-20 13:59:33.478145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:25.860 [2024-11-20 13:59:33.478153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:39:25.860 [2024-11-20 13:59:33.478161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:25.860 [2024-11-20 13:59:33.478168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:39:25.860 [2024-11-20 13:59:33.478174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:25.860 [2024-11-20 13:59:33.478181] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:25.860 [2024-11-20 13:59:33.478188] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:39:25.860 [2024-11-20 13:59:33.478194] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:25.860 [2024-11-20 13:59:33.478214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:25.860 [2024-11-20 13:59:33.478221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:39:25.860 [2024-11-20 13:59:33.478228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:25.860 [2024-11-20 13:59:33.478235] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:25.860 [2024-11-20 13:59:33.478242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:39:25.860 [2024-11-20 13:59:33.478248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:25.860 [2024-11-20 13:59:33.478255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:25.860 [2024-11-20 13:59:33.478262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:39:25.860 [2024-11-20 13:59:33.478269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:25.860 [2024-11-20 13:59:33.478275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:25.860 [2024-11-20 13:59:33.478282] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:39:25.860 [2024-11-20 13:59:33.478289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:25.860 [2024-11-20 13:59:33.478295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:25.860 [2024-11-20 13:59:33.478302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:39:25.860 [2024-11-20 13:59:33.478308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:25.860 [2024-11-20 13:59:33.478315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:25.860 [2024-11-20 13:59:33.478321] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:39:25.860 [2024-11-20 13:59:33.478327] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:25.860 [2024-11-20 13:59:33.478333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:25.860 [2024-11-20 13:59:33.478339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:39:25.860 [2024-11-20 13:59:33.478346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:25.860 [2024-11-20 13:59:33.478352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:25.860 [2024-11-20 13:59:33.478359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:39:25.860 [2024-11-20 13:59:33.478366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:25.860 [2024-11-20 13:59:33.478372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:25.860 [2024-11-20 13:59:33.478379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:39:25.860 [2024-11-20 13:59:33.478385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:25.860 [2024-11-20 13:59:33.478391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:25.860 [2024-11-20 13:59:33.478398] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:39:25.860 [2024-11-20 13:59:33.478403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:25.860 [2024-11-20 13:59:33.478409] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:25.860 [2024-11-20 13:59:33.478417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:25.861 [2024-11-20 13:59:33.478424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:25.861 [2024-11-20 13:59:33.478435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:25.861 [2024-11-20 13:59:33.478442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:25.861 
[2024-11-20 13:59:33.478449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:25.861 [2024-11-20 13:59:33.478456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:25.861 [2024-11-20 13:59:33.478462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:25.861 [2024-11-20 13:59:33.478468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:25.861 [2024-11-20 13:59:33.478475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:25.861 [2024-11-20 13:59:33.478484] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:25.861 [2024-11-20 13:59:33.478494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:25.861 [2024-11-20 13:59:33.478502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:39:25.861 [2024-11-20 13:59:33.478510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:39:25.861 [2024-11-20 13:59:33.478518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:39:25.861 [2024-11-20 13:59:33.478526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:39:25.861 [2024-11-20 13:59:33.478533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:39:25.861 [2024-11-20 13:59:33.478540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:39:25.861 [2024-11-20 13:59:33.478547] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:39:25.861 [2024-11-20 13:59:33.478553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:39:25.861 [2024-11-20 13:59:33.478560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:39:25.861 [2024-11-20 13:59:33.478567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:39:25.861 [2024-11-20 13:59:33.478574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:39:25.861 [2024-11-20 13:59:33.478580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:39:25.861 [2024-11-20 13:59:33.478587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:39:25.861 [2024-11-20 13:59:33.478594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:39:25.861 [2024-11-20 13:59:33.478600] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:25.861 [2024-11-20 13:59:33.478610] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:25.861 [2024-11-20 13:59:33.478618] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:25.861 [2024-11-20 13:59:33.478625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:25.861 [2024-11-20 13:59:33.478632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:25.861 [2024-11-20 13:59:33.478639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:25.861 [2024-11-20 13:59:33.478646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.861 [2024-11-20 13:59:33.478653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:25.861 [2024-11-20 13:59:33.478664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:39:25.861 [2024-11-20 13:59:33.478671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.861 [2024-11-20 13:59:33.518029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.861 [2024-11-20 13:59:33.518070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:25.861 [2024-11-20 13:59:33.518084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.366 ms 00:39:25.861 [2024-11-20 13:59:33.518093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.861 [2024-11-20 13:59:33.518218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.861 [2024-11-20 13:59:33.518233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:25.861 [2024-11-20 13:59:33.518241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:39:25.861 [2024-11-20 13:59:33.518249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.861 [2024-11-20 13:59:33.573789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.861 [2024-11-20 13:59:33.573829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:25.861 [2024-11-20 13:59:33.573840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.625 ms 00:39:25.861 [2024-11-20 13:59:33.573851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.861 [2024-11-20 13:59:33.573954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.861 [2024-11-20 13:59:33.573965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:25.861 [2024-11-20 13:59:33.573974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:39:25.861 [2024-11-20 13:59:33.573981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.861 [2024-11-20 13:59:33.574395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.861 [2024-11-20 13:59:33.574406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:25.861 [2024-11-20 13:59:33.574414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:39:25.861 [2024-11-20 13:59:33.574428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:25.861 [2024-11-20 
13:59:33.574534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:25.861 [2024-11-20 13:59:33.574546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:25.861 [2024-11-20 13:59:33.574555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:39:25.861 [2024-11-20 13:59:33.574562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:26.132 [2024-11-20 13:59:33.593305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:26.132 [2024-11-20 13:59:33.593413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:26.132 [2024-11-20 13:59:33.593428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.758 ms 00:39:26.132 [2024-11-20 13:59:33.593436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:26.132 [2024-11-20 13:59:33.611974] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:39:26.132 [2024-11-20 13:59:33.612046] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:26.132 [2024-11-20 13:59:33.612059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:26.132 [2024-11-20 13:59:33.612083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:26.132 [2024-11-20 13:59:33.612092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.548 ms 00:39:26.132 [2024-11-20 13:59:33.612101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:26.132 [2024-11-20 13:59:33.640068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:26.132 [2024-11-20 13:59:33.640130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:26.132 [2024-11-20 13:59:33.640141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.950 ms 00:39:26.132 [2024-11-20 13:59:33.640149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:26.132 [2024-11-20 13:59:33.657213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:26.132 [2024-11-20 13:59:33.657297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:26.132 [2024-11-20 13:59:33.657312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.025 ms 00:39:26.132 [2024-11-20 13:59:33.657319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:26.132 [2024-11-20 13:59:33.675917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:26.132 [2024-11-20 13:59:33.675953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:26.132 [2024-11-20 13:59:33.675965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.568 ms 00:39:26.132 [2024-11-20 13:59:33.675972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:26.132 [2024-11-20 13:59:33.676685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:26.132 [2024-11-20 13:59:33.676701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:26.132 [2024-11-20 13:59:33.676710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.601 ms 00:39:26.132 [2024-11-20 13:59:33.676740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:26.132 [2024-11-20 13:59:33.761134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:39:26.132 [2024-11-20 13:59:33.761315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:26.132 [2024-11-20 13:59:33.761335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.526 ms 00:39:26.132 [2024-11-20 13:59:33.761344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:26.132 [2024-11-20 13:59:33.775133] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:39:26.132 [2024-11-20 13:59:33.791666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:26.132 [2024-11-20 13:59:33.791865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:26.132 [2024-11-20 13:59:33.791885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.196 ms 00:39:26.132 [2024-11-20 13:59:33.791900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:26.132 [2024-11-20 13:59:33.792051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:26.132 [2024-11-20 13:59:33.792063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:26.132 [2024-11-20 13:59:33.792072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:39:26.132 [2024-11-20 13:59:33.792080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:26.132 [2024-11-20 13:59:33.792140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:26.132 [2024-11-20 13:59:33.792148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:26.133 [2024-11-20 13:59:33.792156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:39:26.133 [2024-11-20 13:59:33.792163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:26.133 [2024-11-20 13:59:33.792204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:26.133 [2024-11-20 13:59:33.792218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:26.133 [2024-11-20 13:59:33.792226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:39:26.133 [2024-11-20 13:59:33.792233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:26.133 [2024-11-20 13:59:33.792270] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:26.133 [2024-11-20 13:59:33.792280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:26.133 [2024-11-20 13:59:33.792288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:26.133 [2024-11-20 13:59:33.792295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:39:26.133 [2024-11-20 13:59:33.792303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:26.133 [2024-11-20 13:59:33.830302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:26.133 [2024-11-20 13:59:33.830376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:26.133 [2024-11-20 13:59:33.830392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.049 ms 00:39:26.133 [2024-11-20 13:59:33.830402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:26.133 [2024-11-20 13:59:33.830616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:26.133 [2024-11-20 13:59:33.830629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:39:26.133 [2024-11-20 13:59:33.830638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:39:26.133 [2024-11-20 13:59:33.830647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:26.133 [2024-11-20 13:59:33.831754] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:26.133 [2024-11-20 13:59:33.836934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.383 ms, result 0 00:39:26.133 [2024-11-20 13:59:33.837988] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:26.406 [2024-11-20 13:59:33.857827] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:27.344 [2024-11-20T13:59:42.363Z] Copying: 256/256 [MB] (average 30 MBps) [2024-11-20 13:59:42.143777] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:34.644 [2024-11-20 13:59:42.158085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.644 [2024-11-20 13:59:42.158160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:34.644 [2024-11-20 13:59:42.158176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:34.644 [2024-11-20 13:59:42.158210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.644 [2024-11-20 13:59:42.158232] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:39:34.644 [2024-11-20 13:59:42.162327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.644 [2024-11-20 13:59:42.162353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:34.644 [2024-11-20 13:59:42.162362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.090 ms 00:39:34.644 [2024-11-20 13:59:42.162369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.644 [2024-11-20 13:59:42.162603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.644 [2024-11-20 13:59:42.162613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:34.644 [2024-11-20 13:59:42.162621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.198 ms 00:39:34.644 [2024-11-20 13:59:42.162628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.644 [2024-11-20 13:59:42.165398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.644 [2024-11-20 13:59:42.165468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:34.644 [2024-11-20 13:59:42.165480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:39:34.644 [2024-11-20 13:59:42.165487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.644 [2024-11-20 13:59:42.170801] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.644 [2024-11-20 13:59:42.170825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:34.644 [2024-11-20 13:59:42.170834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.306 ms 00:39:34.644 [2024-11-20 13:59:42.170840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.644 [2024-11-20 13:59:42.205419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.644 [2024-11-20 13:59:42.205454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:34.644 [2024-11-20 13:59:42.205465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.569 ms 00:39:34.644 [2024-11-20 13:59:42.205472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.644 [2024-11-20 13:59:42.225529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.644 [2024-11-20 13:59:42.225571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:34.644 [2024-11-20 13:59:42.225587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.062 ms 00:39:34.644 [2024-11-20 13:59:42.225595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.644 [2024-11-20 13:59:42.225713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.644 [2024-11-20 13:59:42.225736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:34.644 [2024-11-20 13:59:42.225744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:39:34.644 [2024-11-20 13:59:42.225751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.644 [2024-11-20 13:59:42.260477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.644 [2024-11-20 13:59:42.260510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:34.644 [2024-11-20 13:59:42.260520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.759 ms 00:39:34.644 [2024-11-20 13:59:42.260527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.644 [2024-11-20 13:59:42.295295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.644 [2024-11-20 13:59:42.295329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:34.644 [2024-11-20 13:59:42.295339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.787 ms 00:39:34.644 [2024-11-20 13:59:42.295345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.644 [2024-11-20 13:59:42.330270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.644 [2024-11-20 13:59:42.330303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:34.644 [2024-11-20 13:59:42.330315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.938 ms 00:39:34.644 [2024-11-20 13:59:42.330322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.905 [2024-11-20 13:59:42.365272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.905 [2024-11-20 13:59:42.365308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:34.905 [2024-11-20 13:59:42.365319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.955 ms 00:39:34.905 [2024-11-20 13:59:42.365325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:39:34.905 [2024-11-20 13:59:42.365358] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:34.905 [2024-11-20 13:59:42.365372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:34.905 [2024-11-20 13:59:42.365517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:39:34.906 [2024-11-20 13:59:42.365544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.365999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.366005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.366012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.366018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.366026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.366033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.366039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.366046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.366054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.366062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.366069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.366095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.366103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.366111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.366119] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.366127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:34.906 [2024-11-20 13:59:42.366142] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:34.906 [2024-11-20 13:59:42.366149] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: de3f65f3-0236-49e4-9c47-e95c64595e9c 00:39:34.906 [2024-11-20 13:59:42.366157] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:34.906 [2024-11-20 13:59:42.366164] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:34.906 [2024-11-20 13:59:42.366172] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:34.906 [2024-11-20 13:59:42.366180] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:34.906 [2024-11-20 13:59:42.366187] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:34.906 [2024-11-20 13:59:42.366196] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:34.906 [2024-11-20 13:59:42.366204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:34.906 [2024-11-20 13:59:42.366211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:34.906 [2024-11-20 13:59:42.366217] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:34.906 [2024-11-20 13:59:42.366225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.907 [2024-11-20 13:59:42.366239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:34.907 [2024-11-20 13:59:42.366247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms 00:39:34.907 [2024-11-20 13:59:42.366255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.907 [2024-11-20 13:59:42.385424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.907 [2024-11-20 13:59:42.385456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:34.907 [2024-11-20 13:59:42.385467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.187 ms 00:39:34.907 [2024-11-20 13:59:42.385475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.907 [2024-11-20 13:59:42.386090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:34.907 [2024-11-20 13:59:42.386108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:34.907 [2024-11-20 13:59:42.386116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:39:34.907 [2024-11-20 13:59:42.386124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.907 [2024-11-20 13:59:42.440122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:34.907 [2024-11-20 13:59:42.440158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:34.907 [2024-11-20 13:59:42.440169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:34.907 [2024-11-20 13:59:42.440177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.907 [2024-11-20 13:59:42.440278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:34.907 [2024-11-20 13:59:42.440288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:34.907 
[2024-11-20 13:59:42.440296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:34.907 [2024-11-20 13:59:42.440303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.907 [2024-11-20 13:59:42.440356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:34.907 [2024-11-20 13:59:42.440374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:34.907 [2024-11-20 13:59:42.440381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:34.907 [2024-11-20 13:59:42.440388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.907 [2024-11-20 13:59:42.440406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:34.907 [2024-11-20 13:59:42.440417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:34.907 [2024-11-20 13:59:42.440425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:34.907 [2024-11-20 13:59:42.440433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:34.907 [2024-11-20 13:59:42.559469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:34.907 [2024-11-20 13:59:42.559522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:34.907 [2024-11-20 13:59:42.559534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:34.907 [2024-11-20 13:59:42.559542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.166 [2024-11-20 13:59:42.656495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.166 [2024-11-20 13:59:42.656616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:35.166 [2024-11-20 13:59:42.656646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.166 [2024-11-20 13:59:42.656666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.166 [2024-11-20 13:59:42.656767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.166 [2024-11-20 13:59:42.656796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:35.166 [2024-11-20 13:59:42.656832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.166 [2024-11-20 13:59:42.656851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.166 [2024-11-20 13:59:42.656906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.166 [2024-11-20 13:59:42.656947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:35.166 [2024-11-20 13:59:42.656963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.166 [2024-11-20 13:59:42.656970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.166 [2024-11-20 13:59:42.657082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.166 [2024-11-20 13:59:42.657096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:35.166 [2024-11-20 13:59:42.657103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.166 [2024-11-20 13:59:42.657111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.166 [2024-11-20 13:59:42.657148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.166 [2024-11-20 13:59:42.657159] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:35.167 [2024-11-20 13:59:42.657167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.167 [2024-11-20 13:59:42.657177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.167 [2024-11-20 13:59:42.657217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.167 [2024-11-20 13:59:42.657225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:35.167 [2024-11-20 13:59:42.657232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.167 [2024-11-20 13:59:42.657239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.167 [2024-11-20 13:59:42.657284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:35.167 [2024-11-20 13:59:42.657293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:35.167 [2024-11-20 13:59:42.657304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:35.167 [2024-11-20 13:59:42.657311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:35.167 [2024-11-20 13:59:42.657451] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 500.313 ms, result 0 00:39:36.104 00:39:36.104 00:39:36.104 13:59:43 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:39:36.105 13:59:43 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:39:36.673 13:59:44 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:36.674 [2024-11-20 13:59:44.266205] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:39:36.674 [2024-11-20 13:59:44.266411] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79613 ] 00:39:36.934 [2024-11-20 13:59:44.442845] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:36.934 [2024-11-20 13:59:44.551066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:37.193 [2024-11-20 13:59:44.891945] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:37.194 [2024-11-20 13:59:44.892013] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:37.455 [2024-11-20 13:59:45.049463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.455 [2024-11-20 13:59:45.049584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:37.455 [2024-11-20 13:59:45.049600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:37.455 [2024-11-20 13:59:45.049607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.455 [2024-11-20 13:59:45.052634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.455 [2024-11-20 13:59:45.052729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:37.455 [2024-11-20 13:59:45.052747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.013 ms 00:39:37.455 [2024-11-20 13:59:45.052757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.455 [2024-11-20 13:59:45.052852] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:37.455 [2024-11-20 13:59:45.053753] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:37.455 [2024-11-20 13:59:45.053784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.455 [2024-11-20 13:59:45.053793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:37.455 [2024-11-20 13:59:45.053801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.943 ms 00:39:37.455 [2024-11-20 13:59:45.053807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.455 [2024-11-20 13:59:45.055206] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:39:37.455 [2024-11-20 13:59:45.073052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.456 [2024-11-20 13:59:45.073089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:37.456 [2024-11-20 13:59:45.073099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.882 ms 00:39:37.456 [2024-11-20 13:59:45.073106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.456 [2024-11-20 13:59:45.073190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.456 [2024-11-20 13:59:45.073201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:37.456 [2024-11-20 13:59:45.073209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:39:37.456 [2024-11-20 13:59:45.073216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.456 [2024-11-20 13:59:45.079609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:39:37.456 [2024-11-20 13:59:45.079693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:37.456 [2024-11-20 13:59:45.079705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.369 ms 00:39:37.456 [2024-11-20 13:59:45.079729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.456 [2024-11-20 13:59:45.079838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.456 [2024-11-20 13:59:45.079851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:37.456 [2024-11-20 13:59:45.079860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:39:37.456 [2024-11-20 13:59:45.079868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.456 [2024-11-20 13:59:45.079897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.456 [2024-11-20 13:59:45.079909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:37.456 [2024-11-20 13:59:45.079916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:39:37.456 [2024-11-20 13:59:45.079923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.456 [2024-11-20 13:59:45.079946] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:39:37.456 [2024-11-20 13:59:45.084217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.456 [2024-11-20 13:59:45.084245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:37.456 [2024-11-20 13:59:45.084254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.287 ms 00:39:37.456 [2024-11-20 13:59:45.084261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.456 [2024-11-20 13:59:45.084334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.456 [2024-11-20 13:59:45.084343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:37.456 [2024-11-20 13:59:45.084351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:39:37.456 [2024-11-20 13:59:45.084358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.456 [2024-11-20 13:59:45.084376] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:37.456 [2024-11-20 13:59:45.084403] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:37.456 [2024-11-20 13:59:45.084437] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:37.456 [2024-11-20 13:59:45.084452] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:39:37.456 [2024-11-20 13:59:45.084538] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:37.456 [2024-11-20 13:59:45.084548] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:37.456 [2024-11-20 13:59:45.084558] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:37.456 [2024-11-20 13:59:45.084567] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:37.456 [2024-11-20 13:59:45.084580] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:37.456 [2024-11-20 13:59:45.084588] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:39:37.456 [2024-11-20 13:59:45.084595] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:37.456 [2024-11-20 13:59:45.084603] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:37.456 [2024-11-20 13:59:45.084610] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:37.456 [2024-11-20 13:59:45.084617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.456 [2024-11-20 13:59:45.084624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:37.456 [2024-11-20 13:59:45.084631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.245 ms 00:39:37.456 [2024-11-20 13:59:45.084638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.456 [2024-11-20 13:59:45.084708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.456 [2024-11-20 13:59:45.084719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:37.456 [2024-11-20 13:59:45.084727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:39:37.456 [2024-11-20 13:59:45.084750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.456 [2024-11-20 13:59:45.084837] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:37.456 [2024-11-20 13:59:45.084847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:37.456 [2024-11-20 13:59:45.084855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:37.456 [2024-11-20 13:59:45.084863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:37.456 [2024-11-20 13:59:45.084871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:37.456 [2024-11-20 13:59:45.084878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:37.456 [2024-11-20 13:59:45.084886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:39:37.456 [2024-11-20 13:59:45.084893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:37.456 [2024-11-20 13:59:45.084901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:39:37.456 [2024-11-20 13:59:45.084907] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:37.456 [2024-11-20 13:59:45.084914] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:37.456 [2024-11-20 13:59:45.084921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:39:37.456 [2024-11-20 13:59:45.084927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:37.456 [2024-11-20 13:59:45.084948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:37.456 [2024-11-20 13:59:45.084955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:39:37.456 [2024-11-20 13:59:45.084962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:37.456 [2024-11-20 13:59:45.084969] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:37.456 [2024-11-20 13:59:45.084975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:39:37.456 [2024-11-20 13:59:45.084982] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:37.456 [2024-11-20 13:59:45.084988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:37.456 [2024-11-20 13:59:45.084994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:39:37.456 [2024-11-20 13:59:45.085000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:37.456 [2024-11-20 13:59:45.085007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:37.456 [2024-11-20 13:59:45.085013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:39:37.456 [2024-11-20 13:59:45.085019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:37.456 [2024-11-20 13:59:45.085025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:37.456 [2024-11-20 13:59:45.085031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:39:37.456 [2024-11-20 13:59:45.085037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:37.456 [2024-11-20 13:59:45.085043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:37.456 [2024-11-20 13:59:45.085049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:39:37.456 [2024-11-20 13:59:45.085055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:37.456 [2024-11-20 13:59:45.085061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:37.456 [2024-11-20 13:59:45.085067] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:39:37.456 [2024-11-20 13:59:45.085073] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:37.456 [2024-11-20 13:59:45.085079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:37.456 [2024-11-20 13:59:45.085085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:39:37.456 [2024-11-20 13:59:45.085091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:37.456 [2024-11-20 13:59:45.085097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:37.456 [2024-11-20 13:59:45.085104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:39:37.456 [2024-11-20 13:59:45.085110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:37.456 [2024-11-20 13:59:45.085118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:37.456 [2024-11-20 13:59:45.085124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:39:37.456 [2024-11-20 13:59:45.085130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:37.456 [2024-11-20 13:59:45.085136] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:37.456 [2024-11-20 13:59:45.085142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:37.456 [2024-11-20 13:59:45.085149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:37.456 [2024-11-20 13:59:45.085159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:37.456 [2024-11-20 13:59:45.085166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:37.456 [2024-11-20 13:59:45.085173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:37.456 [2024-11-20 13:59:45.085180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:37.456 
[2024-11-20 13:59:45.085187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:37.456 [2024-11-20 13:59:45.085193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:37.456 [2024-11-20 13:59:45.085200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:37.456 [2024-11-20 13:59:45.085208] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:37.456 [2024-11-20 13:59:45.085217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:37.457 [2024-11-20 13:59:45.085226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:39:37.457 [2024-11-20 13:59:45.085232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:39:37.457 [2024-11-20 13:59:45.085240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:39:37.457 [2024-11-20 13:59:45.085247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:39:37.457 [2024-11-20 13:59:45.085254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:39:37.457 [2024-11-20 13:59:45.085261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:39:37.457 [2024-11-20 13:59:45.085267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:39:37.457 [2024-11-20 13:59:45.085274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:39:37.457 [2024-11-20 13:59:45.085281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:39:37.457 [2024-11-20 13:59:45.085289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:39:37.457 [2024-11-20 13:59:45.085295] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:39:37.457 [2024-11-20 13:59:45.085303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:39:37.457 [2024-11-20 13:59:45.085310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:39:37.457 [2024-11-20 13:59:45.085316] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:39:37.457 [2024-11-20 13:59:45.085323] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:37.457 [2024-11-20 13:59:45.085330] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:37.457 [2024-11-20 13:59:45.085338] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:39:37.457 [2024-11-20 13:59:45.085345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:37.457 [2024-11-20 13:59:45.085352] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:37.457 [2024-11-20 13:59:45.085359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:37.457 [2024-11-20 13:59:45.085366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.457 [2024-11-20 13:59:45.085373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:37.457 [2024-11-20 13:59:45.085383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:39:37.457 [2024-11-20 13:59:45.085391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.457 [2024-11-20 13:59:45.120274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.457 [2024-11-20 13:59:45.120320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:37.457 [2024-11-20 13:59:45.120332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.893 ms 00:39:37.457 [2024-11-20 13:59:45.120356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.457 [2024-11-20 13:59:45.120499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.457 [2024-11-20 13:59:45.120515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:37.457 [2024-11-20 13:59:45.120523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:39:37.457 [2024-11-20 13:59:45.120530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.717 [2024-11-20 13:59:45.193104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.717 [2024-11-20 13:59:45.193147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:37.718 [2024-11-20 13:59:45.193158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.691 ms 00:39:37.718 [2024-11-20 13:59:45.193169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.718 [2024-11-20 13:59:45.193272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.718 [2024-11-20 13:59:45.193282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:37.718 [2024-11-20 13:59:45.193290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:37.718 [2024-11-20 13:59:45.193297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.718 [2024-11-20 13:59:45.193717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.718 [2024-11-20 13:59:45.193729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:37.718 [2024-11-20 13:59:45.193754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:39:37.718 [2024-11-20 13:59:45.193767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.718 [2024-11-20 13:59:45.193883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.718 [2024-11-20 13:59:45.193896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:37.718 [2024-11-20 13:59:45.193905] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:39:37.718 [2024-11-20 13:59:45.193912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.718 [2024-11-20 13:59:45.212110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.718 [2024-11-20 13:59:45.212148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:37.718 [2024-11-20 13:59:45.212159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.209 ms 00:39:37.718 [2024-11-20 13:59:45.212167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.718 [2024-11-20 13:59:45.230869] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:39:37.718 [2024-11-20 13:59:45.230904] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:37.718 [2024-11-20 13:59:45.230915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.718 [2024-11-20 13:59:45.230922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:37.718 [2024-11-20 13:59:45.230931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.669 ms 00:39:37.718 [2024-11-20 13:59:45.230938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.718 [2024-11-20 13:59:45.257981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.718 [2024-11-20 13:59:45.258026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:37.718 [2024-11-20 13:59:45.258036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.022 ms 00:39:37.718 [2024-11-20 13:59:45.258059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.718 [2024-11-20 13:59:45.275015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.718 [2024-11-20 13:59:45.275047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:37.718 [2024-11-20 13:59:45.275056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.920 ms 00:39:37.718 [2024-11-20 13:59:45.275063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.718 [2024-11-20 13:59:45.291879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.718 [2024-11-20 13:59:45.291942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:37.718 [2024-11-20 13:59:45.291985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.786 ms 00:39:37.718 [2024-11-20 13:59:45.292004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.718 [2024-11-20 13:59:45.292691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.718 [2024-11-20 13:59:45.292766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:37.718 [2024-11-20 13:59:45.292799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:39:37.718 [2024-11-20 13:59:45.292819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.718 [2024-11-20 13:59:45.374461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.718 [2024-11-20 13:59:45.374607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:37.718 [2024-11-20 13:59:45.374680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 81.747 ms 00:39:37.718 [2024-11-20 13:59:45.374702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.718 [2024-11-20 13:59:45.385752] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:39:37.718 [2024-11-20 13:59:45.401860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.718 [2024-11-20 13:59:45.401961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:37.718 [2024-11-20 13:59:45.401998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.039 ms 00:39:37.718 [2024-11-20 13:59:45.402032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.718 [2024-11-20 13:59:45.402165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.718 [2024-11-20 13:59:45.402203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:37.718 [2024-11-20 13:59:45.402232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:39:37.718 [2024-11-20 13:59:45.402258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.718 [2024-11-20 13:59:45.402349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.718 [2024-11-20 13:59:45.402381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:37.718 [2024-11-20 13:59:45.402409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:39:37.718 [2024-11-20 13:59:45.402435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.718 [2024-11-20 13:59:45.402501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.718 [2024-11-20 13:59:45.402538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:37.718 [2024-11-20 13:59:45.402567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:39:37.718 [2024-11-20 13:59:45.402592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.718 [2024-11-20 13:59:45.402632] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:37.718 [2024-11-20 13:59:45.402642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.718 [2024-11-20 13:59:45.402650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:37.718 [2024-11-20 13:59:45.402659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:39:37.718 [2024-11-20 13:59:45.402666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.979 [2024-11-20 13:59:45.437549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.979 [2024-11-20 13:59:45.437588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:37.979 [2024-11-20 13:59:45.437599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.930 ms 00:39:37.979 [2024-11-20 13:59:45.437607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.979 [2024-11-20 13:59:45.437750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.979 [2024-11-20 13:59:45.437762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:37.979 [2024-11-20 13:59:45.437771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:39:37.979 [2024-11-20 13:59:45.437778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:39:37.979 [2024-11-20 13:59:45.438695] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:37.979 [2024-11-20 13:59:45.442887] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 389.683 ms, result 0 00:39:37.979 [2024-11-20 13:59:45.443727] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:37.979 [2024-11-20 13:59:45.461318] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:37.979  [2024-11-20T13:59:45.698Z] Copying: 4096/4096 [kB] (average 26 MBps)[2024-11-20 13:59:45.617029] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:37.979 [2024-11-20 13:59:45.631365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.979 [2024-11-20 13:59:45.631403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:37.979 [2024-11-20 13:59:45.631416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:37.979 [2024-11-20 13:59:45.631444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.979 [2024-11-20 13:59:45.631465] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:39:37.979 [2024-11-20 13:59:45.635577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.979 [2024-11-20 13:59:45.635660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:37.979 [2024-11-20 13:59:45.635674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.108 ms 00:39:37.979 [2024-11-20 13:59:45.635681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.979 [2024-11-20 13:59:45.637642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.979 [2024-11-20 13:59:45.637677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:37.979 [2024-11-20 13:59:45.637689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.940 ms 00:39:37.979 [2024-11-20 13:59:45.637696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.979 [2024-11-20 13:59:45.640979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.979 [2024-11-20 13:59:45.641016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:37.979 [2024-11-20 13:59:45.641026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.273 ms 00:39:37.979 [2024-11-20 13:59:45.641034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.979 [2024-11-20 13:59:45.646435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.979 [2024-11-20 13:59:45.646500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:37.979 [2024-11-20 13:59:45.646512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.384 ms 00:39:37.979 [2024-11-20 13:59:45.646535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:37.979 [2024-11-20 13:59:45.681840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:37.979 [2024-11-20 13:59:45.681887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:37.979 [2024-11-20 13:59:45.681899] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 35.306 ms 00:39:37.979 [2024-11-20 13:59:45.681922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.240 [2024-11-20 13:59:45.702852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.240 [2024-11-20 13:59:45.702906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:38.240 [2024-11-20 13:59:45.702923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.910 ms 00:39:38.240 [2024-11-20 13:59:45.702930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.240 [2024-11-20 13:59:45.703067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.240 [2024-11-20 13:59:45.703077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:38.240 [2024-11-20 13:59:45.703086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:39:38.240 [2024-11-20 13:59:45.703094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.240 [2024-11-20 13:59:45.738510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.240 [2024-11-20 13:59:45.738549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:38.240 [2024-11-20 13:59:45.738559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.454 ms 00:39:38.240 [2024-11-20 13:59:45.738582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.240 [2024-11-20 13:59:45.773286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.240 [2024-11-20 13:59:45.773373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:38.240 [2024-11-20 13:59:45.773388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.699 ms 00:39:38.240 [2024-11-20 13:59:45.773396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.240 [2024-11-20 13:59:45.808076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.240 [2024-11-20 13:59:45.808126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:38.240 [2024-11-20 13:59:45.808139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.696 ms 00:39:38.240 [2024-11-20 13:59:45.808164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.240 [2024-11-20 13:59:45.843618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.240 [2024-11-20 13:59:45.843688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:38.240 [2024-11-20 13:59:45.843699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.422 ms 00:39:38.241 [2024-11-20 13:59:45.843707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.241 [2024-11-20 13:59:45.843802] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:38.241 [2024-11-20 13:59:45.843819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:39:38.241 [2024-11-20 13:59:45.843854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.843998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:38.241 [2024-11-20 13:59:45.844334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844401] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:38.242 [2024-11-20 13:59:45.844591] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:38.242 [2024-11-20 13:59:45.844599] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: de3f65f3-0236-49e4-9c47-e95c64595e9c 00:39:38.242 [2024-11-20 13:59:45.844606] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:38.242 [2024-11-20 13:59:45.844614] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:39:38.242 [2024-11-20 13:59:45.844621] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:38.242 [2024-11-20 13:59:45.844629] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:38.242 [2024-11-20 13:59:45.844636] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:38.242 [2024-11-20 13:59:45.844643] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:38.242 [2024-11-20 13:59:45.844650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:38.242 [2024-11-20 13:59:45.844657] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:38.242 [2024-11-20 13:59:45.844663] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:38.242 [2024-11-20 13:59:45.844671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.242 [2024-11-20 13:59:45.844683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:38.242 [2024-11-20 13:59:45.844691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.873 ms 00:39:38.242 [2024-11-20 13:59:45.844699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.242 [2024-11-20 13:59:45.864939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.242 [2024-11-20 13:59:45.864999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:38.242 [2024-11-20 13:59:45.865013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.247 ms 00:39:38.242 [2024-11-20 13:59:45.865022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.242 [2024-11-20 13:59:45.865693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:38.242 [2024-11-20 13:59:45.865729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:38.242 [2024-11-20 13:59:45.865741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:39:38.242 [2024-11-20 13:59:45.865750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.242 [2024-11-20 13:59:45.919596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:38.242 [2024-11-20 13:59:45.919739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:38.242 [2024-11-20 13:59:45.919756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:38.242 [2024-11-20 13:59:45.919764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.242 [2024-11-20 13:59:45.919860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:38.242 [2024-11-20 13:59:45.919869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:38.242 [2024-11-20 13:59:45.919877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:38.242 [2024-11-20 13:59:45.919884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.242 [2024-11-20 13:59:45.919933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:38.242 [2024-11-20 13:59:45.919944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:38.242 [2024-11-20 13:59:45.919952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:38.242 [2024-11-20 13:59:45.919960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.242 [2024-11-20 13:59:45.919979] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:38.242 [2024-11-20 13:59:45.919992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:38.242 [2024-11-20 13:59:45.920000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:38.242 [2024-11-20 13:59:45.920007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.503 [2024-11-20 13:59:46.042815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:38.503 [2024-11-20 13:59:46.042957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:38.503 [2024-11-20 13:59:46.042975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:38.503 [2024-11-20 13:59:46.042984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.503 [2024-11-20 13:59:46.143543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:38.503 [2024-11-20 13:59:46.143607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:38.503 [2024-11-20 13:59:46.143636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:38.503 [2024-11-20 13:59:46.143643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.503 [2024-11-20 13:59:46.143773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:38.503 [2024-11-20 13:59:46.143784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:38.503 [2024-11-20 13:59:46.143793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:38.503 [2024-11-20 13:59:46.143801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.503 [2024-11-20 13:59:46.143829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:38.503 [2024-11-20 13:59:46.143837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:38.503 [2024-11-20 13:59:46.143852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:38.503 [2024-11-20 13:59:46.143860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.503 [2024-11-20 13:59:46.143955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:38.503 [2024-11-20 13:59:46.143973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:38.503 [2024-11-20 13:59:46.143981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:38.503 [2024-11-20 13:59:46.143989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.503 [2024-11-20 13:59:46.144029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:38.503 [2024-11-20 13:59:46.144039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:38.503 [2024-11-20 13:59:46.144053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:38.503 [2024-11-20 13:59:46.144060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.503 [2024-11-20 13:59:46.144100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:38.503 [2024-11-20 13:59:46.144109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:38.503 [2024-11-20 13:59:46.144116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:38.503 [2024-11-20 13:59:46.144123] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:39:38.503 [2024-11-20 13:59:46.144167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:38.504 [2024-11-20 13:59:46.144176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:38.504 [2024-11-20 13:59:46.144187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:38.504 [2024-11-20 13:59:46.144194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:38.504 [2024-11-20 13:59:46.144341] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 513.974 ms, result 0 00:39:39.885 00:39:39.885 00:39:39.885 13:59:47 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=79653 00:39:39.885 13:59:47 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:39:39.885 13:59:47 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 79653 00:39:39.885 13:59:47 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 79653 ']' 00:39:39.885 13:59:47 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:39.885 13:59:47 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:39.885 13:59:47 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:39.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:39.885 13:59:47 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:39.885 13:59:47 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:39:39.885 [2024-11-20 13:59:47.303933] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
00:39:39.885 [2024-11-20 13:59:47.304069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79653 ] 00:39:39.885 [2024-11-20 13:59:47.480682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:39.885 [2024-11-20 13:59:47.592603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.851 13:59:48 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:40.851 13:59:48 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:39:40.851 13:59:48 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:39:41.111 [2024-11-20 13:59:48.654280] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:41.111 [2024-11-20 13:59:48.654341] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:41.372 [2024-11-20 13:59:48.835803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.372 [2024-11-20 13:59:48.835952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:41.372 [2024-11-20 13:59:48.835977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:39:41.372 [2024-11-20 13:59:48.835985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.372 [2024-11-20 13:59:48.839458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.372 [2024-11-20 13:59:48.839532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:41.372 [2024-11-20 13:59:48.839548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.458 ms 00:39:41.372 [2024-11-20 13:59:48.839555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.372 [2024-11-20 13:59:48.839673] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:41.372 [2024-11-20 13:59:48.840645] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:41.372 [2024-11-20 13:59:48.840678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.372 [2024-11-20 13:59:48.840687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:41.372 [2024-11-20 13:59:48.840697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.018 ms 00:39:41.372 [2024-11-20 13:59:48.840705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.372 [2024-11-20 13:59:48.842140] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:39:41.372 [2024-11-20 13:59:48.860742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.372 [2024-11-20 13:59:48.860788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:41.372 [2024-11-20 13:59:48.860801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.641 ms 00:39:41.372 [2024-11-20 13:59:48.860814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.372 [2024-11-20 13:59:48.860906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.372 [2024-11-20 13:59:48.860923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:41.372 [2024-11-20 13:59:48.860932] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:39:41.372 [2024-11-20 13:59:48.860944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.372 [2024-11-20 13:59:48.867525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.372 [2024-11-20 13:59:48.867569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:41.372 [2024-11-20 13:59:48.867595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.535 ms 00:39:41.372 [2024-11-20 13:59:48.867614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.372 [2024-11-20 13:59:48.867770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.372 [2024-11-20 13:59:48.867789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:41.372 [2024-11-20 13:59:48.867799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:39:41.372 [2024-11-20 13:59:48.867811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.372 [2024-11-20 13:59:48.867847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.372 [2024-11-20 13:59:48.867860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:41.372 [2024-11-20 13:59:48.867868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:39:41.372 [2024-11-20 13:59:48.867895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.372 [2024-11-20 13:59:48.867921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:39:41.372 [2024-11-20 13:59:48.872591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.372 [2024-11-20 13:59:48.872620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:41.372 [2024-11-20 13:59:48.872634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.681 ms 00:39:41.372 [2024-11-20 13:59:48.872641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.372 [2024-11-20 13:59:48.872708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.372 [2024-11-20 13:59:48.872733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:41.372 [2024-11-20 13:59:48.872747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:39:41.372 [2024-11-20 13:59:48.872760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.372 [2024-11-20 13:59:48.872786] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:41.372 [2024-11-20 13:59:48.872808] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:41.372 [2024-11-20 13:59:48.872858] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:41.372 [2024-11-20 13:59:48.872877] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:39:41.372 [2024-11-20 13:59:48.872970] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:41.372 [2024-11-20 13:59:48.872981] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:41.372 [2024-11-20 13:59:48.873003] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:39:41.372 [2024-11-20 13:59:48.873014] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:41.372 [2024-11-20 13:59:48.873027] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:41.372 [2024-11-20 13:59:48.873035] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:39:41.372 [2024-11-20 13:59:48.873047] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:41.372 [2024-11-20 13:59:48.873054] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:41.372 [2024-11-20 13:59:48.873070] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:41.372 [2024-11-20 13:59:48.873078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.372 [2024-11-20 13:59:48.873090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:41.372 [2024-11-20 13:59:48.873098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:39:41.372 [2024-11-20 13:59:48.873110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.372 [2024-11-20 13:59:48.873188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.372 [2024-11-20 13:59:48.873201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:41.372 [2024-11-20 13:59:48.873209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:39:41.372 [2024-11-20 13:59:48.873221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.372 [2024-11-20 13:59:48.873306] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:41.372 [2024-11-20 13:59:48.873321] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:41.372 [2024-11-20 13:59:48.873329] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:41.372 [2024-11-20 13:59:48.873341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:41.372 [2024-11-20 13:59:48.873348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:41.372 [2024-11-20 13:59:48.873359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:41.372 [2024-11-20 13:59:48.873366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:39:41.372 [2024-11-20 13:59:48.873384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:41.372 [2024-11-20 13:59:48.873391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:39:41.372 [2024-11-20 13:59:48.873403] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:41.372 [2024-11-20 13:59:48.873411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:41.372 [2024-11-20 13:59:48.873421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:39:41.372 [2024-11-20 13:59:48.873428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:41.372 [2024-11-20 13:59:48.873436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:41.372 [2024-11-20 13:59:48.873443] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:39:41.372 [2024-11-20 13:59:48.873450] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:41.373 
[2024-11-20 13:59:48.873457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:39:41.373 [2024-11-20 13:59:48.873464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:39:41.373 [2024-11-20 13:59:48.873470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:41.373 [2024-11-20 13:59:48.873479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:41.373 [2024-11-20 13:59:48.873494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:39:41.373 [2024-11-20 13:59:48.873503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:41.373 [2024-11-20 13:59:48.873509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:41.373 [2024-11-20 13:59:48.873519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:39:41.373 [2024-11-20 13:59:48.873526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:41.373 [2024-11-20 13:59:48.873534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:41.373 [2024-11-20 13:59:48.873540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:39:41.373 [2024-11-20 13:59:48.873548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:41.373 [2024-11-20 13:59:48.873555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:41.373 [2024-11-20 13:59:48.873562] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:39:41.373 [2024-11-20 13:59:48.873568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:41.373 [2024-11-20 13:59:48.873576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:41.373 [2024-11-20 13:59:48.873582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:39:41.373 [2024-11-20 13:59:48.873592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:41.373 [2024-11-20 13:59:48.873598] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:41.373 [2024-11-20 13:59:48.873606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:39:41.373 [2024-11-20 13:59:48.873612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:41.373 [2024-11-20 13:59:48.873620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:41.373 [2024-11-20 13:59:48.873626] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:39:41.373 [2024-11-20 13:59:48.873635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:41.373 [2024-11-20 13:59:48.873642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:41.373 [2024-11-20 13:59:48.873649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:39:41.373 [2024-11-20 13:59:48.873656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:41.373 [2024-11-20 13:59:48.873663] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:41.373 [2024-11-20 13:59:48.873673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:41.373 [2024-11-20 13:59:48.873681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:41.373 [2024-11-20 13:59:48.873688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:41.373 [2024-11-20 13:59:48.873697] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:39:41.373 [2024-11-20 13:59:48.873704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:41.373 [2024-11-20 13:59:48.873712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:41.373 [2024-11-20 13:59:48.873834] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:41.373 [2024-11-20 13:59:48.873857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:41.373 [2024-11-20 13:59:48.873875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:41.373 [2024-11-20 13:59:48.873897] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:41.373 [2024-11-20 13:59:48.873928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:41.373 [2024-11-20 13:59:48.873960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:39:41.373 [2024-11-20 13:59:48.873988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:39:41.373 [2024-11-20 13:59:48.874019] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:39:41.373 [2024-11-20 13:59:48.874046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:39:41.373 [2024-11-20 13:59:48.874077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:39:41.373 [2024-11-20 13:59:48.874163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:39:41.373 [2024-11-20 13:59:48.874215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:39:41.373 [2024-11-20 13:59:48.874244] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:39:41.373 [2024-11-20 13:59:48.874330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:39:41.373 [2024-11-20 13:59:48.874370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:39:41.373 [2024-11-20 13:59:48.874411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:39:41.373 [2024-11-20 13:59:48.874445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:39:41.373 [2024-11-20 13:59:48.874507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:39:41.373 [2024-11-20 13:59:48.874546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:39:41.373 [2024-11-20 13:59:48.874588] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:41.373 [2024-11-20 
13:59:48.874635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:41.373 [2024-11-20 13:59:48.874679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:41.373 [2024-11-20 13:59:48.874727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:41.373 [2024-11-20 13:59:48.874771] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:41.373 [2024-11-20 13:59:48.874810] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:41.373 [2024-11-20 13:59:48.874850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.373 [2024-11-20 13:59:48.874880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:41.373 [2024-11-20 13:59:48.874917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.593 ms 00:39:41.373 [2024-11-20 13:59:48.874945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.373 [2024-11-20 13:59:48.914086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.373 [2024-11-20 13:59:48.914220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:41.373 [2024-11-20 13:59:48.914259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.113 ms 00:39:41.373 [2024-11-20 13:59:48.914285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.373 [2024-11-20 13:59:48.914449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.373 [2024-11-20 13:59:48.914478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:41.373 [2024-11-20 13:59:48.914522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:39:41.373 [2024-11-20 13:59:48.914543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.373 [2024-11-20 13:59:48.961645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.373 [2024-11-20 13:59:48.961784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:41.373 [2024-11-20 13:59:48.961821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.145 ms 00:39:41.373 [2024-11-20 13:59:48.961844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.373 [2024-11-20 13:59:48.961981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.373 [2024-11-20 13:59:48.962030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:41.373 [2024-11-20 13:59:48.962045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:41.373 [2024-11-20 13:59:48.962052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.373 [2024-11-20 13:59:48.962475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.373 [2024-11-20 13:59:48.962490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:41.373 [2024-11-20 13:59:48.962502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:39:41.373 [2024-11-20 13:59:48.962509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
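
The superblock v5 tables above encode the same layout in raw block units: blk_offs and blk_sz count FTL blocks, so assuming SPDK FTL's fixed 4 KiB block size the hex sizes map straight back onto the MiB dump. For instance, the L2P region (type:0x2) and the base-device data region (type:0x9):

    # blk_sz is in FTL blocks; 4096-byte blocks assumed (FTL's fixed block size).
    echo "$(( 0x5a00    * 4096 / 1024 / 1024 )) MiB"  # type:0x2 -> 90 MiB (l2p)
    echo "$(( 0x1900000 * 4096 / 1024 / 1024 )) MiB"  # type:0x9 -> 102400 MiB (data_btm)
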
[FTL][ftl0] status: 0 00:39:41.373 [2024-11-20 13:59:48.962633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.374 [2024-11-20 13:59:48.962646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:41.374 [2024-11-20 13:59:48.962658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:39:41.374 [2024-11-20 13:59:48.962666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.374 [2024-11-20 13:59:48.983548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.374 [2024-11-20 13:59:48.983657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:41.374 [2024-11-20 13:59:48.983696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.894 ms 00:39:41.374 [2024-11-20 13:59:48.983705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.374 [2024-11-20 13:59:49.011740] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:39:41.374 [2024-11-20 13:59:49.011774] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:41.374 [2024-11-20 13:59:49.011796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.374 [2024-11-20 13:59:49.011804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:41.374 [2024-11-20 13:59:49.011815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.989 ms 00:39:41.374 [2024-11-20 13:59:49.011822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.374 [2024-11-20 13:59:49.039389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.374 [2024-11-20 13:59:49.039477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:41.374 [2024-11-20 13:59:49.039498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.540 ms 00:39:41.374 [2024-11-20 13:59:49.039506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.374 [2024-11-20 13:59:49.057229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.374 [2024-11-20 13:59:49.057318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:41.374 [2024-11-20 13:59:49.057360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.659 ms 00:39:41.374 [2024-11-20 13:59:49.057367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.374 [2024-11-20 13:59:49.074612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.374 [2024-11-20 13:59:49.074646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:41.374 [2024-11-20 13:59:49.074661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.205 ms 00:39:41.374 [2024-11-20 13:59:49.074684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.374 [2024-11-20 13:59:49.075532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.374 [2024-11-20 13:59:49.075565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:41.374 [2024-11-20 13:59:49.075580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.715 ms 00:39:41.374 [2024-11-20 13:59:49.075589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.634 [2024-11-20 
13:59:49.160082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.634 [2024-11-20 13:59:49.160170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:41.634 [2024-11-20 13:59:49.160189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 84.610 ms 00:39:41.634 [2024-11-20 13:59:49.160198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.634 [2024-11-20 13:59:49.171154] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:39:41.634 [2024-11-20 13:59:49.187060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.634 [2024-11-20 13:59:49.187126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:41.634 [2024-11-20 13:59:49.187138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.794 ms 00:39:41.634 [2024-11-20 13:59:49.187165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.634 [2024-11-20 13:59:49.187290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.634 [2024-11-20 13:59:49.187303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:41.634 [2024-11-20 13:59:49.187312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:39:41.634 [2024-11-20 13:59:49.187321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.634 [2024-11-20 13:59:49.187380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.634 [2024-11-20 13:59:49.187391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:41.634 [2024-11-20 13:59:49.187399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:39:41.635 [2024-11-20 13:59:49.187412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.635 [2024-11-20 13:59:49.187435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.635 [2024-11-20 13:59:49.187445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:41.635 [2024-11-20 13:59:49.187453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:41.635 [2024-11-20 13:59:49.187465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.635 [2024-11-20 13:59:49.187499] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:41.635 [2024-11-20 13:59:49.187513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.635 [2024-11-20 13:59:49.187524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:41.635 [2024-11-20 13:59:49.187533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:39:41.635 [2024-11-20 13:59:49.187544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.635 [2024-11-20 13:59:49.223082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.635 [2024-11-20 13:59:49.223121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:41.635 [2024-11-20 13:59:49.223139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.576 ms 00:39:41.635 [2024-11-20 13:59:49.223147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.635 [2024-11-20 13:59:49.223262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.635 [2024-11-20 13:59:49.223273] 
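
Every management step above is traced as an Action / name / duration / status quadruple, and the per-step durations (84.610 ms for the P2L checkpoint restore, 47.145 ms for NV cache init, and so on) add up toward the overall startup figure reported just below. On a saved copy of this log with one entry per line (the filename here is illustrative), a step/duration summary can be pulled out with:

    # Pair each step name with its duration (assumes one log entry per line).
    grep -oE "name: [A-Za-z0-9 ]+|duration: [0-9.]+ ms" build.log | paste - -
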
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:41.635 [2024-11-20 13:59:49.223291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:39:41.635 [2024-11-20 13:59:49.223298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.635 [2024-11-20 13:59:49.224399] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:41.635 [2024-11-20 13:59:49.228893] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 388.927 ms, result 0 00:39:41.635 [2024-11-20 13:59:49.229945] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:41.635 Some configs were skipped because the RPC state that can call them passed over. 00:39:41.635 13:59:49 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:39:41.895 [2024-11-20 13:59:49.488934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:41.895 [2024-11-20 13:59:49.489004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:39:41.895 [2024-11-20 13:59:49.489021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.545 ms 00:39:41.895 [2024-11-20 13:59:49.489033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:41.895 [2024-11-20 13:59:49.489076] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.697 ms, result 0 00:39:41.895 true 00:39:41.895 13:59:49 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:39:42.155 [2024-11-20 13:59:49.700493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:42.155 [2024-11-20 13:59:49.700546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:39:42.155 [2024-11-20 13:59:49.700564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.310 ms 00:39:42.155 [2024-11-20 13:59:49.700573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:42.155 [2024-11-20 13:59:49.700617] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.446 ms, result 0 00:39:42.155 true 00:39:42.155 13:59:49 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 79653 00:39:42.155 13:59:49 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79653 ']' 00:39:42.155 13:59:49 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79653 00:39:42.155 13:59:49 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:39:42.155 13:59:49 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:42.155 13:59:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79653 00:39:42.155 killing process with pid 79653 00:39:42.155 13:59:49 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:42.155 13:59:49 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:42.155 13:59:49 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79653' 00:39:42.155 13:59:49 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 79653 00:39:42.155 13:59:49 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 79653 00:39:43.535 [2024-11-20 13:59:50.977291] 
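
The two bdev_ftl_unmap calls above (trim.sh@99 and @100) trim the first and the last 1024 blocks of the device: 23591936 is simply the L2P entry count logged at startup minus the unmap length. The same RPC can be issued directly against a running target:

    # Last-1024-blocks LBA, derived from the geometry logged at startup.
    echo $(( 23592960 - 1024 ))   # -> 23591936
    # The RPC the test script wraps (path as used elsewhere in this log):
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
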
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.535 [2024-11-20 13:59:50.977370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:43.535 [2024-11-20 13:59:50.977388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:39:43.535 [2024-11-20 13:59:50.977401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.535 [2024-11-20 13:59:50.977426] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:39:43.535 [2024-11-20 13:59:50.982387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.535 [2024-11-20 13:59:50.982423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:43.535 [2024-11-20 13:59:50.982439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.949 ms 00:39:43.535 [2024-11-20 13:59:50.982447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.535 [2024-11-20 13:59:50.982768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.535 [2024-11-20 13:59:50.982782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:43.535 [2024-11-20 13:59:50.982792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:39:43.535 [2024-11-20 13:59:50.982800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.535 [2024-11-20 13:59:50.986391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.535 [2024-11-20 13:59:50.986428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:43.535 [2024-11-20 13:59:50.986440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.575 ms 00:39:43.535 [2024-11-20 13:59:50.986448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.535 [2024-11-20 13:59:50.992194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.536 [2024-11-20 13:59:50.992227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:43.536 [2024-11-20 13:59:50.992239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.703 ms 00:39:43.536 [2024-11-20 13:59:50.992246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.536 [2024-11-20 13:59:51.009198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.536 [2024-11-20 13:59:51.009250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:43.536 [2024-11-20 13:59:51.009269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.915 ms 00:39:43.536 [2024-11-20 13:59:51.009297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.536 [2024-11-20 13:59:51.021622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.536 [2024-11-20 13:59:51.021760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:43.536 [2024-11-20 13:59:51.021781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.262 ms 00:39:43.536 [2024-11-20 13:59:51.021790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.536 [2024-11-20 13:59:51.021954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.536 [2024-11-20 13:59:51.021967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:43.536 [2024-11-20 13:59:51.021978] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.090 ms 00:39:43.536 [2024-11-20 13:59:51.021987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.536 [2024-11-20 13:59:51.038289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.536 [2024-11-20 13:59:51.038330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:43.536 [2024-11-20 13:59:51.038343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.309 ms 00:39:43.536 [2024-11-20 13:59:51.038352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.536 [2024-11-20 13:59:51.054231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.536 [2024-11-20 13:59:51.054265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:43.536 [2024-11-20 13:59:51.054286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.848 ms 00:39:43.536 [2024-11-20 13:59:51.054294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.536 [2024-11-20 13:59:51.069397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.536 [2024-11-20 13:59:51.069429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:43.536 [2024-11-20 13:59:51.069448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.075 ms 00:39:43.536 [2024-11-20 13:59:51.069455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.536 [2024-11-20 13:59:51.084047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.536 [2024-11-20 13:59:51.084127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:43.536 [2024-11-20 13:59:51.084147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.534 ms 00:39:43.536 [2024-11-20 13:59:51.084156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.536 [2024-11-20 13:59:51.084208] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:43.536 [2024-11-20 13:59:51.084225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 
13:59:51.084338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:39:43.536 [2024-11-20 13:59:51.084601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:43.536 [2024-11-20 13:59:51.084894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.084903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.084915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.084923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.084936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.084955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.084968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.084976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.084993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:43.537 [2024-11-20 13:59:51.085316] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:43.537 [2024-11-20 13:59:51.085331] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: de3f65f3-0236-49e4-9c47-e95c64595e9c 00:39:43.537 [2024-11-20 13:59:51.085362] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:43.537 [2024-11-20 13:59:51.085380] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:43.537 [2024-11-20 13:59:51.085387] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:43.537 [2024-11-20 13:59:51.085399] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:43.537 [2024-11-20 13:59:51.085407] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:43.537 [2024-11-20 13:59:51.085420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:43.537 [2024-11-20 13:59:51.085428] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:43.537 [2024-11-20 13:59:51.085438] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:43.537 [2024-11-20 13:59:51.085446] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:43.537 [2024-11-20 13:59:51.085457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
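
The statistics dump above shows 960 total writes against 0 user writes, which is consistent with WAF (write amplification factor) computed as total writes over user writes: all 960 writes were FTL metadata traffic from the startup/trim/shutdown cycle, with no user data written yet, so the ratio is reported as "inf". A guarded version of that division:

    # WAF = total writes / user writes; undefined ("inf") when nothing is user-written.
    awk -v total=960 -v user=0 'BEGIN { print (user ? total / user : "inf") }'
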
00:39:43.537 [2024-11-20 13:59:51.085465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:43.537 [2024-11-20 13:59:51.085478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.255 ms 00:39:43.537 [2024-11-20 13:59:51.085489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.537 [2024-11-20 13:59:51.107695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.537 [2024-11-20 13:59:51.107750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:43.537 [2024-11-20 13:59:51.107771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.216 ms 00:39:43.537 [2024-11-20 13:59:51.107780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.537 [2024-11-20 13:59:51.108447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:43.537 [2024-11-20 13:59:51.108469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:43.537 [2024-11-20 13:59:51.108489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.605 ms 00:39:43.537 [2024-11-20 13:59:51.108497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.537 [2024-11-20 13:59:51.180841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:43.537 [2024-11-20 13:59:51.180987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:43.537 [2024-11-20 13:59:51.181007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:43.537 [2024-11-20 13:59:51.181016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.537 [2024-11-20 13:59:51.181168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:43.537 [2024-11-20 13:59:51.181179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:43.537 [2024-11-20 13:59:51.181193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:43.537 [2024-11-20 13:59:51.181201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.537 [2024-11-20 13:59:51.181263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:43.537 [2024-11-20 13:59:51.181274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:43.537 [2024-11-20 13:59:51.181288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:43.537 [2024-11-20 13:59:51.181295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.537 [2024-11-20 13:59:51.181318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:43.537 [2024-11-20 13:59:51.181327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:43.537 [2024-11-20 13:59:51.181336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:43.537 [2024-11-20 13:59:51.181347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.800 [2024-11-20 13:59:51.319563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:43.800 [2024-11-20 13:59:51.319781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:43.800 [2024-11-20 13:59:51.319806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:43.800 [2024-11-20 13:59:51.319815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.800 [2024-11-20 
13:59:51.430476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:43.800 [2024-11-20 13:59:51.430554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:43.800 [2024-11-20 13:59:51.430574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:43.800 [2024-11-20 13:59:51.430582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.800 [2024-11-20 13:59:51.430735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:43.800 [2024-11-20 13:59:51.430747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:43.800 [2024-11-20 13:59:51.430760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:43.800 [2024-11-20 13:59:51.430768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.800 [2024-11-20 13:59:51.430801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:43.800 [2024-11-20 13:59:51.430810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:43.800 [2024-11-20 13:59:51.430821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:43.800 [2024-11-20 13:59:51.430829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.800 [2024-11-20 13:59:51.430976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:43.800 [2024-11-20 13:59:51.430988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:43.800 [2024-11-20 13:59:51.430998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:43.800 [2024-11-20 13:59:51.431007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.800 [2024-11-20 13:59:51.431049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:43.800 [2024-11-20 13:59:51.431059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:39:43.800 [2024-11-20 13:59:51.431070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:43.800 [2024-11-20 13:59:51.431078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.800 [2024-11-20 13:59:51.431142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:43.800 [2024-11-20 13:59:51.431151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:43.800 [2024-11-20 13:59:51.431165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:43.800 [2024-11-20 13:59:51.431172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.800 [2024-11-20 13:59:51.431227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:43.800 [2024-11-20 13:59:51.431238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:43.800 [2024-11-20 13:59:51.431249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:43.800 [2024-11-20 13:59:51.431256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:43.800 [2024-11-20 13:59:51.431421] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 454.983 ms, result 0 00:39:44.740 13:59:52 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:45.000 [2024-11-20 13:59:52.528466] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:39:45.000 [2024-11-20 13:59:52.528578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79719 ] 00:39:45.000 [2024-11-20 13:59:52.705040] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:45.260 [2024-11-20 13:59:52.815119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:45.520 [2024-11-20 13:59:53.162610] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:45.520 [2024-11-20 13:59:53.162684] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:39:45.781 [2024-11-20 13:59:53.320773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.781 [2024-11-20 13:59:53.320905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:39:45.781 [2024-11-20 13:59:53.320923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:45.781 [2024-11-20 13:59:53.320932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.781 [2024-11-20 13:59:53.323948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.781 [2024-11-20 13:59:53.324029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:45.781 [2024-11-20 13:59:53.324044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.000 ms 00:39:45.781 [2024-11-20 13:59:53.324052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.781 [2024-11-20 13:59:53.324141] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:39:45.782 [2024-11-20 13:59:53.325179] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:39:45.782 [2024-11-20 13:59:53.325213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.782 [2024-11-20 13:59:53.325222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:45.782 [2024-11-20 13:59:53.325231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.083 ms 00:39:45.782 [2024-11-20 13:59:53.325238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.782 [2024-11-20 13:59:53.326687] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:39:45.782 [2024-11-20 13:59:53.345121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.782 [2024-11-20 13:59:53.345161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:39:45.782 [2024-11-20 13:59:53.345173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.471 ms 00:39:45.782 [2024-11-20 13:59:53.345197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.782 [2024-11-20 13:59:53.345291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.782 [2024-11-20 13:59:53.345305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:39:45.782 [2024-11-20 13:59:53.345314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:39:45.782 [2024-11-20 
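
The spdk_dd invocation above re-creates the whole bdev stack from ftl.json inside a fresh SPDK process (note the new pid 79719 and the second FTL startup that follows); the two "Currently unable to find bdev with name: nvc0n1" notices appear to be benign open retries issued while the JSON config is still instantiating the cache bdev. The command shape, rewrapped for readability with the paths exactly as logged:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
        --of=/home/vagrant/spdk_repo/spdk/test/ftl/data \
        --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
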
13:59:53.345321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.782 [2024-11-20 13:59:53.352090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.782 [2024-11-20 13:59:53.352120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:45.782 [2024-11-20 13:59:53.352131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.742 ms 00:39:45.782 [2024-11-20 13:59:53.352140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.782 [2024-11-20 13:59:53.352244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.782 [2024-11-20 13:59:53.352261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:45.782 [2024-11-20 13:59:53.352270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:39:45.782 [2024-11-20 13:59:53.352278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.782 [2024-11-20 13:59:53.352311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.782 [2024-11-20 13:59:53.352333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:39:45.782 [2024-11-20 13:59:53.352341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:39:45.782 [2024-11-20 13:59:53.352348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.782 [2024-11-20 13:59:53.352373] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:39:45.782 [2024-11-20 13:59:53.356968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.782 [2024-11-20 13:59:53.357001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:45.782 [2024-11-20 13:59:53.357011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.613 ms 00:39:45.782 [2024-11-20 13:59:53.357018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.782 [2024-11-20 13:59:53.357080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.782 [2024-11-20 13:59:53.357090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:39:45.782 [2024-11-20 13:59:53.357099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:39:45.782 [2024-11-20 13:59:53.357106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.782 [2024-11-20 13:59:53.357125] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:39:45.782 [2024-11-20 13:59:53.357149] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:39:45.782 [2024-11-20 13:59:53.357183] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:39:45.782 [2024-11-20 13:59:53.357198] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:39:45.782 [2024-11-20 13:59:53.357286] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:39:45.782 [2024-11-20 13:59:53.357296] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:39:45.782 [2024-11-20 13:59:53.357306] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
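
This second startup runs with "FTL layout setup mode 0" right after the successful "Load super block" / "Validate super block" steps, so the device is evidently being reopened rather than re-created: the superblock blob loads (0x150 nvc, 0x48 base, 0x190 layout bytes) match the sizes the first instance stored, and the layout dump that follows reproduces the earlier one region for region. A quick consistency check against a saved per-line copy of the log (filename illustrative):

    # Stored and loaded blob sizes should pair up across the two instances.
    grep -oE "(nvc |base )?layout blob (load|store) 0x[0-9a-f]+ bytes" build.log | sort | uniq -c
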
00:39:45.782 [2024-11-20 13:59:53.357317] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:39:45.782 [2024-11-20 13:59:53.357329] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:39:45.782 [2024-11-20 13:59:53.357339] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:39:45.782 [2024-11-20 13:59:53.357346] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:39:45.782 [2024-11-20 13:59:53.357355] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:39:45.782 [2024-11-20 13:59:53.357362] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:39:45.782 [2024-11-20 13:59:53.357370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.782 [2024-11-20 13:59:53.357377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:39:45.782 [2024-11-20 13:59:53.357385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:39:45.782 [2024-11-20 13:59:53.357392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.782 [2024-11-20 13:59:53.357464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.782 [2024-11-20 13:59:53.357475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:39:45.782 [2024-11-20 13:59:53.357483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:39:45.782 [2024-11-20 13:59:53.357490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.782 [2024-11-20 13:59:53.357579] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:39:45.782 [2024-11-20 13:59:53.357590] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:39:45.782 [2024-11-20 13:59:53.357597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:45.782 [2024-11-20 13:59:53.357605] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:45.782 [2024-11-20 13:59:53.357612] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:39:45.782 [2024-11-20 13:59:53.357619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:39:45.782 [2024-11-20 13:59:53.357627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:39:45.782 [2024-11-20 13:59:53.357636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:39:45.782 [2024-11-20 13:59:53.357643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:39:45.782 [2024-11-20 13:59:53.357650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:45.782 [2024-11-20 13:59:53.357657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:39:45.782 [2024-11-20 13:59:53.357664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:39:45.782 [2024-11-20 13:59:53.357671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:39:45.782 [2024-11-20 13:59:53.357691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:39:45.782 [2024-11-20 13:59:53.357698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:39:45.782 [2024-11-20 13:59:53.357705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:45.782 [2024-11-20 13:59:53.357712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:39:45.782 [2024-11-20 13:59:53.357738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:39:45.782 [2024-11-20 13:59:53.357746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:45.782 [2024-11-20 13:59:53.357753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:39:45.782 [2024-11-20 13:59:53.357761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:39:45.782 [2024-11-20 13:59:53.357768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:45.782 [2024-11-20 13:59:53.357774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:39:45.783 [2024-11-20 13:59:53.357781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:39:45.783 [2024-11-20 13:59:53.357787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:45.783 [2024-11-20 13:59:53.357794] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:39:45.783 [2024-11-20 13:59:53.357800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:39:45.783 [2024-11-20 13:59:53.357806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:45.783 [2024-11-20 13:59:53.357812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:39:45.783 [2024-11-20 13:59:53.357819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:39:45.783 [2024-11-20 13:59:53.357825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:39:45.783 [2024-11-20 13:59:53.357832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:39:45.783 [2024-11-20 13:59:53.357838] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:39:45.783 [2024-11-20 13:59:53.357844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:45.783 [2024-11-20 13:59:53.357850] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:39:45.783 [2024-11-20 13:59:53.357857] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:39:45.783 [2024-11-20 13:59:53.357863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:39:45.783 [2024-11-20 13:59:53.357869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:39:45.783 [2024-11-20 13:59:53.357875] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:39:45.783 [2024-11-20 13:59:53.357881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:45.783 [2024-11-20 13:59:53.357888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:39:45.783 [2024-11-20 13:59:53.357894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:39:45.783 [2024-11-20 13:59:53.357901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:45.783 [2024-11-20 13:59:53.357907] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:39:45.783 [2024-11-20 13:59:53.357915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:39:45.783 [2024-11-20 13:59:53.357931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:39:45.783 [2024-11-20 13:59:53.357942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:39:45.783 [2024-11-20 13:59:53.357949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:39:45.783 [2024-11-20 13:59:53.357956] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:39:45.783 [2024-11-20 13:59:53.357963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:39:45.783 [2024-11-20 13:59:53.357970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:39:45.783 [2024-11-20 13:59:53.357976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:39:45.783 [2024-11-20 13:59:53.357983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:39:45.783 [2024-11-20 13:59:53.357990] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:39:45.783 [2024-11-20 13:59:53.357999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:45.783 [2024-11-20 13:59:53.358008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:39:45.783 [2024-11-20 13:59:53.358015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:39:45.783 [2024-11-20 13:59:53.358021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:39:45.783 [2024-11-20 13:59:53.358028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:39:45.783 [2024-11-20 13:59:53.358035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:39:45.783 [2024-11-20 13:59:53.358043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:39:45.783 [2024-11-20 13:59:53.358049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:39:45.783 [2024-11-20 13:59:53.358057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:39:45.783 [2024-11-20 13:59:53.358063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:39:45.783 [2024-11-20 13:59:53.358070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:39:45.783 [2024-11-20 13:59:53.358077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:39:45.783 [2024-11-20 13:59:53.358084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:39:45.783 [2024-11-20 13:59:53.358091] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:39:45.783 [2024-11-20 13:59:53.358098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:39:45.783 [2024-11-20 13:59:53.358105] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:39:45.783 [2024-11-20 13:59:53.358113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:39:45.783 [2024-11-20 13:59:53.358120] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:39:45.783 [2024-11-20 13:59:53.358131] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:39:45.783 [2024-11-20 13:59:53.358138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:39:45.783 [2024-11-20 13:59:53.358144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:39:45.783 [2024-11-20 13:59:53.358152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.783 [2024-11-20 13:59:53.358160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:39:45.783 [2024-11-20 13:59:53.358171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:39:45.783 [2024-11-20 13:59:53.358178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.783 [2024-11-20 13:59:53.394340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.783 [2024-11-20 13:59:53.394386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:45.783 [2024-11-20 13:59:53.394400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.171 ms 00:39:45.783 [2024-11-20 13:59:53.394408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.783 [2024-11-20 13:59:53.394556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.783 [2024-11-20 13:59:53.394572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:39:45.783 [2024-11-20 13:59:53.394581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:39:45.783 [2024-11-20 13:59:53.394589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.783 [2024-11-20 13:59:53.449180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.783 [2024-11-20 13:59:53.449225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:45.783 [2024-11-20 13:59:53.449237] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.673 ms 00:39:45.783 [2024-11-20 13:59:53.449264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.783 [2024-11-20 13:59:53.449379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.783 [2024-11-20 13:59:53.449389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:45.783 [2024-11-20 13:59:53.449398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:39:45.783 [2024-11-20 13:59:53.449405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.783 [2024-11-20 13:59:53.449866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.783 [2024-11-20 13:59:53.449878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:45.783 [2024-11-20 13:59:53.449887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.442 ms 00:39:45.783 [2024-11-20 13:59:53.449900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
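The superblock layout dumps above describe each region as a type, version, block offset, and block size, all counted in 4 KiB FTL blocks, and the MiB figures in the earlier NV cache and base device layout dumps follow directly from that unit. A quick cross-check in shell arithmetic (a sketch; the 4096-byte block size is an assumption taken from the base bdev's block_size reported further down in this log):

# l2p region from the dump above: "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00"
blk_sz=0x5a00        # hex block count from the superblock dump
block_bytes=4096     # FTL block size, per the bdev block_size in this log
echo $(( blk_sz * block_bytes / 1024 / 1024 ))   # prints 90, matching "Region l2p ... blocks: 90.00 MiB"

(Likewise band_md at blk_sz:0x80 is 128 blocks, i.e. the 0.50 MiB shown in the NV cache layout.)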
00:39:45.783 [2024-11-20 13:59:53.450013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.783 [2024-11-20 13:59:53.450026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:45.783 [2024-11-20 13:59:53.450034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:39:45.783 [2024-11-20 13:59:53.450042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.783 [2024-11-20 13:59:53.468344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.783 [2024-11-20 13:59:53.468384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:45.784 [2024-11-20 13:59:53.468396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.315 ms 00:39:45.784 [2024-11-20 13:59:53.468419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:45.784 [2024-11-20 13:59:53.486045] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:39:45.784 [2024-11-20 13:59:53.486080] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:39:45.784 [2024-11-20 13:59:53.486092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:45.784 [2024-11-20 13:59:53.486100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:39:45.784 [2024-11-20 13:59:53.486107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.588 ms 00:39:45.784 [2024-11-20 13:59:53.486131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.048 [2024-11-20 13:59:53.514697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.048 [2024-11-20 13:59:53.514750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:39:46.048 [2024-11-20 13:59:53.514761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.548 ms 00:39:46.048 [2024-11-20 13:59:53.514785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.048 [2024-11-20 13:59:53.532052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.048 [2024-11-20 13:59:53.532085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:39:46.048 [2024-11-20 13:59:53.532095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.228 ms 00:39:46.048 [2024-11-20 13:59:53.532103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.048 [2024-11-20 13:59:53.549247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.048 [2024-11-20 13:59:53.549281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:39:46.048 [2024-11-20 13:59:53.549291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.111 ms 00:39:46.048 [2024-11-20 13:59:53.549298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.048 [2024-11-20 13:59:53.550060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.048 [2024-11-20 13:59:53.550089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:39:46.048 [2024-11-20 13:59:53.550100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.653 ms 00:39:46.048 [2024-11-20 13:59:53.550108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.048 [2024-11-20 13:59:53.633379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.048 [2024-11-20 
13:59:53.633432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:39:46.048 [2024-11-20 13:59:53.633445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.405 ms 00:39:46.048 [2024-11-20 13:59:53.633469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.048 [2024-11-20 13:59:53.645007] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:39:46.048 [2024-11-20 13:59:53.661308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.048 [2024-11-20 13:59:53.661467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:39:46.048 [2024-11-20 13:59:53.661484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.759 ms 00:39:46.048 [2024-11-20 13:59:53.661501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.048 [2024-11-20 13:59:53.661654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.048 [2024-11-20 13:59:53.661665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:39:46.048 [2024-11-20 13:59:53.661674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:39:46.048 [2024-11-20 13:59:53.661681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.048 [2024-11-20 13:59:53.661761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.048 [2024-11-20 13:59:53.661772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:39:46.049 [2024-11-20 13:59:53.661782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:39:46.049 [2024-11-20 13:59:53.661791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.049 [2024-11-20 13:59:53.661837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.049 [2024-11-20 13:59:53.661851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:39:46.049 [2024-11-20 13:59:53.661860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:39:46.049 [2024-11-20 13:59:53.661869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.049 [2024-11-20 13:59:53.661903] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:39:46.049 [2024-11-20 13:59:53.661912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.049 [2024-11-20 13:59:53.661920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:39:46.049 [2024-11-20 13:59:53.661928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:39:46.049 [2024-11-20 13:59:53.661936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.049 [2024-11-20 13:59:53.697720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.049 [2024-11-20 13:59:53.697760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:39:46.049 [2024-11-20 13:59:53.697772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.829 ms 00:39:46.049 [2024-11-20 13:59:53.697794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.049 [2024-11-20 13:59:53.697905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:46.049 [2024-11-20 13:59:53.697917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:39:46.049 [2024-11-20 
13:59:53.697926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:39:46.049 [2024-11-20 13:59:53.697933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:46.049 [2024-11-20 13:59:53.698952] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:46.049 [2024-11-20 13:59:53.703148] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 378.496 ms, result 0 00:39:46.049 [2024-11-20 13:59:53.704083] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:46.049 [2024-11-20 13:59:53.722162] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:39:47.435  [2024-11-20T13:59:56.090Z] Copying: 33/256 [MB] (33 MBps) [2024-11-20T13:59:57.028Z] Copying: 64/256 [MB] (31 MBps) [2024-11-20T13:59:57.966Z] Copying: 96/256 [MB] (31 MBps) [2024-11-20T13:59:58.904Z] Copying: 128/256 [MB] (31 MBps) [2024-11-20T13:59:59.839Z] Copying: 160/256 [MB] (31 MBps) [2024-11-20T14:00:00.781Z] Copying: 191/256 [MB] (31 MBps) [2024-11-20T14:00:02.158Z] Copying: 222/256 [MB] (31 MBps) [2024-11-20T14:00:02.158Z] Copying: 253/256 [MB] (30 MBps) [2024-11-20T14:00:02.416Z] Copying: 256/256 [MB] (average 31 MBps)[2024-11-20 14:00:02.265868] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:39:54.697 [2024-11-20 14:00:02.296037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.697 [2024-11-20 14:00:02.296085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:39:54.697 [2024-11-20 14:00:02.296100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:39:54.697 [2024-11-20 14:00:02.296115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.698 [2024-11-20 14:00:02.296140] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:39:54.698 [2024-11-20 14:00:02.300516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.698 [2024-11-20 14:00:02.300546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:39:54.698 [2024-11-20 14:00:02.300557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.370 ms 00:39:54.698 [2024-11-20 14:00:02.300565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.698 [2024-11-20 14:00:02.300827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.698 [2024-11-20 14:00:02.300839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:39:54.698 [2024-11-20 14:00:02.300871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.235 ms 00:39:54.698 [2024-11-20 14:00:02.300879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.698 [2024-11-20 14:00:02.304106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.698 [2024-11-20 14:00:02.304170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:39:54.698 [2024-11-20 14:00:02.304183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.217 ms 00:39:54.698 [2024-11-20 14:00:02.304191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
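Every management step in this trace is recorded by trace_step as a four-field group (Action or Rollback, name, duration, status), and the finish_msg summary above ('FTL startup', duration = 378.496 ms) is roughly the sum of those step durations. When reading a saved log offline, the costly steps can be ranked with a short pipeline; a sketch, assuming the raw console log keeps one trace_step field per line and was saved as ftl.log (a hypothetical filename):

grep -E 'trace_step.*(name:|duration:)' ftl.log \
  | awk '/name:/     { sub(/.*name: /, "");     step = $0 }
         /duration:/ { sub(/.*duration: /, ""); sub(/ ms.*/, ""); print $0, step }' \
  | sort -rn | head
# prints the slowest steps first, e.g. "54.673 Initialize NV cache"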
00:39:54.698 [2024-11-20 14:00:02.309780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.698 [2024-11-20 14:00:02.309809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:39:54.698 [2024-11-20 14:00:02.309818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.575 ms 00:39:54.698 [2024-11-20 14:00:02.309825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.698 [2024-11-20 14:00:02.345399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.698 [2024-11-20 14:00:02.345436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:39:54.698 [2024-11-20 14:00:02.345448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.571 ms 00:39:54.698 [2024-11-20 14:00:02.345472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.698 [2024-11-20 14:00:02.365756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.698 [2024-11-20 14:00:02.365819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:39:54.698 [2024-11-20 14:00:02.365837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.269 ms 00:39:54.698 [2024-11-20 14:00:02.365860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.698 [2024-11-20 14:00:02.366014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.698 [2024-11-20 14:00:02.366025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:39:54.698 [2024-11-20 14:00:02.366034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:39:54.698 [2024-11-20 14:00:02.366041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.698 [2024-11-20 14:00:02.401629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.698 [2024-11-20 14:00:02.401749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:39:54.698 [2024-11-20 14:00:02.401764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.625 ms 00:39:54.698 [2024-11-20 14:00:02.401771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.958 [2024-11-20 14:00:02.436911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.958 [2024-11-20 14:00:02.436948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:39:54.958 [2024-11-20 14:00:02.436959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.156 ms 00:39:54.958 [2024-11-20 14:00:02.436966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.958 [2024-11-20 14:00:02.472062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.958 [2024-11-20 14:00:02.472113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:39:54.958 [2024-11-20 14:00:02.472125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.112 ms 00:39:54.959 [2024-11-20 14:00:02.472132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.959 [2024-11-20 14:00:02.506788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.959 [2024-11-20 14:00:02.506825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:39:54.959 [2024-11-20 14:00:02.506835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.623 ms 00:39:54.959 [2024-11-20 14:00:02.506842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.959 [2024-11-20 
14:00:02.506906] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:39:54.959 [2024-11-20 14:00:02.506921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.506930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.506938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.506947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.506954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.506962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.506969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.506976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.506984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.506991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.506997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 
14:00:02.507097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:39:54.959 [2024-11-20 14:00:02.507281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:39:54.959 [2024-11-20 14:00:02.507398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:39:54.960 [2024-11-20 14:00:02.507675] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:39:54.960 [2024-11-20 14:00:02.507683] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: de3f65f3-0236-49e4-9c47-e95c64595e9c 00:39:54.960 [2024-11-20 14:00:02.507691] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:39:54.960 [2024-11-20 14:00:02.507698] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:39:54.960 [2024-11-20 14:00:02.507706] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:39:54.960 [2024-11-20 14:00:02.507728] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:39:54.960 [2024-11-20 14:00:02.507736] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:39:54.960 [2024-11-20 14:00:02.507744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:39:54.960 [2024-11-20 14:00:02.507752] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:39:54.960 [2024-11-20 14:00:02.507758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:39:54.960 [2024-11-20 14:00:02.507765] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:39:54.960 [2024-11-20 14:00:02.507773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.960 [2024-11-20 14:00:02.507785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:39:54.960 [2024-11-20 14:00:02.507793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.870 ms 00:39:54.960 [2024-11-20 14:00:02.507801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.960 [2024-11-20 14:00:02.527789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.960 [2024-11-20 14:00:02.527824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:39:54.960 [2024-11-20 14:00:02.527835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.005 ms 00:39:54.960 [2024-11-20 14:00:02.527842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.960 [2024-11-20 14:00:02.528328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:39:54.960 [2024-11-20 14:00:02.528337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:39:54.960 [2024-11-20 14:00:02.528346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:39:54.960 [2024-11-20 14:00:02.528353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.960 [2024-11-20 14:00:02.581462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:54.960 [2024-11-20 14:00:02.581558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:39:54.960 [2024-11-20 14:00:02.581590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:54.960 [2024-11-20 14:00:02.581599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
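One line in the statistics dump above deserves a note: WAF is the write amplification factor, conventionally total media writes divided by user writes, so a run that has written 960 blocks of metadata and no user data at all reports it as infinite. Assuming that conventional definition, the reported value reproduces as:

total=960 user=0   # "total writes" / "user writes" from the dump above
awk -v t="$total" -v u="$user" 'BEGIN { print (u ? t / u : "inf") }'   # prints: inf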
00:39:54.960 [2024-11-20 14:00:02.581710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:54.960 [2024-11-20 14:00:02.581720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:39:54.960 [2024-11-20 14:00:02.581728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:54.960 [2024-11-20 14:00:02.581750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.960 [2024-11-20 14:00:02.581805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:54.960 [2024-11-20 14:00:02.581817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:39:54.960 [2024-11-20 14:00:02.581830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:54.960 [2024-11-20 14:00:02.581838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:54.960 [2024-11-20 14:00:02.581857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:54.960 [2024-11-20 14:00:02.581867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:39:54.960 [2024-11-20 14:00:02.581875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:54.960 [2024-11-20 14:00:02.581882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.220 [2024-11-20 14:00:02.705348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.220 [2024-11-20 14:00:02.705506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:39:55.220 [2024-11-20 14:00:02.705523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.220 [2024-11-20 14:00:02.705531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.220 [2024-11-20 14:00:02.809668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.220 [2024-11-20 14:00:02.809843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:39:55.220 [2024-11-20 14:00:02.809861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.220 [2024-11-20 14:00:02.809870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.220 [2024-11-20 14:00:02.809979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.220 [2024-11-20 14:00:02.809988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:39:55.220 [2024-11-20 14:00:02.809997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.220 [2024-11-20 14:00:02.810005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.220 [2024-11-20 14:00:02.810033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.220 [2024-11-20 14:00:02.810042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:39:55.220 [2024-11-20 14:00:02.810055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.220 [2024-11-20 14:00:02.810063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.220 [2024-11-20 14:00:02.810172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.220 [2024-11-20 14:00:02.810184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:39:55.220 [2024-11-20 14:00:02.810193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.220 [2024-11-20 14:00:02.810201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.220 [2024-11-20 14:00:02.810237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.220 [2024-11-20 14:00:02.810247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:39:55.220 [2024-11-20 14:00:02.810255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.220 [2024-11-20 14:00:02.810267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.220 [2024-11-20 14:00:02.810308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.220 [2024-11-20 14:00:02.810317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:39:55.220 [2024-11-20 14:00:02.810334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.220 [2024-11-20 14:00:02.810342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.220 [2024-11-20 14:00:02.810388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:39:55.220 [2024-11-20 14:00:02.810397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:39:55.220 [2024-11-20 14:00:02.810409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:39:55.220 [2024-11-20 14:00:02.810416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:39:55.220 [2024-11-20 14:00:02.810561] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 515.519 ms, result 0 00:39:56.159 00:39:56.159 00:39:56.159 14:00:03 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:39:56.728 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:39:56.728 14:00:04 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:39:56.728 14:00:04 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:39:56.728 14:00:04 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:39:56.728 14:00:04 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:39:56.728 14:00:04 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:39:56.728 14:00:04 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:39:56.728 Process with pid 79653 is not found 00:39:56.728 14:00:04 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 79653 00:39:56.728 14:00:04 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 79653 ']' 00:39:56.728 14:00:04 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 79653 00:39:56.728 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79653) - No such process 00:39:56.728 14:00:04 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 79653 is not found' 00:39:56.728 ************************************ 00:39:56.728 END TEST ftl_trim 00:39:56.728 ************************************ 00:39:56.728 00:39:56.728 real 1m6.761s 00:39:56.728 user 1m37.938s 00:39:56.728 sys 0m6.381s 00:39:56.728 14:00:04 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:56.728 14:00:04 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:39:56.989 14:00:04 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:39:56.989 14:00:04 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:39:56.989 14:00:04 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:56.989 14:00:04 ftl -- common/autotest_common.sh@10 -- # set +x 00:39:56.989 ************************************ 00:39:56.989 START TEST ftl_restore 00:39:56.989 
************************************ 00:39:56.989 14:00:04 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:39:56.989 * Looking for test storage... 00:39:56.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:39:56.989 14:00:04 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:56.989 14:00:04 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lcov --version 00:39:56.989 14:00:04 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:56.989 14:00:04 ftl.ftl_restore -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:56.989 14:00:04 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:39:56.989 14:00:04 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:56.989 14:00:04 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:56.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.989 --rc genhtml_branch_coverage=1 00:39:56.989 --rc genhtml_function_coverage=1 00:39:56.989 --rc genhtml_legend=1 00:39:56.989 --rc geninfo_all_blocks=1 00:39:56.989 --rc geninfo_unexecuted_blocks=1 00:39:56.989 00:39:56.989 ' 00:39:56.989 14:00:04 ftl.ftl_restore -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:56.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.989 --rc genhtml_branch_coverage=1 00:39:56.989 --rc genhtml_function_coverage=1 00:39:56.989 --rc genhtml_legend=1 00:39:56.989 --rc geninfo_all_blocks=1 00:39:56.989 --rc geninfo_unexecuted_blocks=1 00:39:56.989 00:39:56.989 ' 00:39:56.989 14:00:04 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:56.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.989 --rc genhtml_branch_coverage=1 00:39:56.989 --rc genhtml_function_coverage=1 00:39:56.989 --rc genhtml_legend=1 00:39:56.989 --rc geninfo_all_blocks=1 00:39:56.989 --rc geninfo_unexecuted_blocks=1 00:39:56.989 00:39:56.989 ' 00:39:56.989 14:00:04 ftl.ftl_restore -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:56.989 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:56.989 --rc genhtml_branch_coverage=1 00:39:56.989 --rc genhtml_function_coverage=1 00:39:56.989 --rc genhtml_legend=1 00:39:56.989 --rc geninfo_all_blocks=1 00:39:56.989 --rc geninfo_unexecuted_blocks=1 00:39:56.989 00:39:56.989 ' 00:39:56.989 14:00:04 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
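The xtrace above walks through scripts/common.sh deciding whether the installed lcov predates 1.15: both version strings are split on dots, dashes, and colons, then compared numerically field by field, with missing fields treated as zero. A condensed sketch of that lt/cmp_versions logic (not the verbatim helpers):

version_lt() {   # usage: version_lt 1.15 2 -> succeeds when $1 < $2
  local -a a b
  local i
  IFS='.-:' read -ra a <<< "$1"
  IFS='.-:' read -ra b <<< "$2"
  for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
    (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
    (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
  done
  return 1   # equal versions are not less-than
}
version_lt 1.15 2 && echo "lcov predates 2"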
00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.HkwpizQgBN 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:39:57.250 
14:00:04 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79906 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:39:57.250 14:00:04 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79906 00:39:57.250 14:00:04 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79906 ']' 00:39:57.250 14:00:04 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:57.250 14:00:04 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:57.250 14:00:04 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:57.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:57.250 14:00:04 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:57.250 14:00:04 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:39:57.250 [2024-11-20 14:00:04.845505] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:39:57.250 [2024-11-20 14:00:04.845699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79906 ] 00:39:57.510 [2024-11-20 14:00:05.021311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:57.510 [2024-11-20 14:00:05.129789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:58.473 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:58.473 14:00:05 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:39:58.473 14:00:05 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:39:58.473 14:00:05 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:39:58.473 14:00:05 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:39:58.473 14:00:05 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:39:58.473 14:00:05 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:39:58.473 14:00:05 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:39:58.733 14:00:06 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:39:58.733 14:00:06 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:39:58.733 14:00:06 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:39:58.733 14:00:06 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:39:58.733 14:00:06 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:39:58.733 14:00:06 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:39:58.733 14:00:06 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:39:58.733 14:00:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:39:58.994 14:00:06 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:39:58.994 { 00:39:58.994 "name": "nvme0n1", 00:39:58.994 "aliases": [ 00:39:58.994 "0bf12345-a9ff-454b-8e49-00541c48fd23" 00:39:58.994 ], 00:39:58.994 "product_name": "NVMe disk", 00:39:58.994 "block_size": 4096, 00:39:58.994 "num_blocks": 1310720, 00:39:58.994 "uuid": 
"0bf12345-a9ff-454b-8e49-00541c48fd23", 00:39:58.994 "numa_id": -1, 00:39:58.994 "assigned_rate_limits": { 00:39:58.994 "rw_ios_per_sec": 0, 00:39:58.994 "rw_mbytes_per_sec": 0, 00:39:58.994 "r_mbytes_per_sec": 0, 00:39:58.994 "w_mbytes_per_sec": 0 00:39:58.994 }, 00:39:58.994 "claimed": true, 00:39:58.994 "claim_type": "read_many_write_one", 00:39:58.994 "zoned": false, 00:39:58.994 "supported_io_types": { 00:39:58.994 "read": true, 00:39:58.994 "write": true, 00:39:58.994 "unmap": true, 00:39:58.994 "flush": true, 00:39:58.994 "reset": true, 00:39:58.994 "nvme_admin": true, 00:39:58.994 "nvme_io": true, 00:39:58.994 "nvme_io_md": false, 00:39:58.994 "write_zeroes": true, 00:39:58.994 "zcopy": false, 00:39:58.994 "get_zone_info": false, 00:39:58.994 "zone_management": false, 00:39:58.994 "zone_append": false, 00:39:58.994 "compare": true, 00:39:58.994 "compare_and_write": false, 00:39:58.994 "abort": true, 00:39:58.994 "seek_hole": false, 00:39:58.994 "seek_data": false, 00:39:58.994 "copy": true, 00:39:58.994 "nvme_iov_md": false 00:39:58.994 }, 00:39:58.994 "driver_specific": { 00:39:58.994 "nvme": [ 00:39:58.994 { 00:39:58.994 "pci_address": "0000:00:11.0", 00:39:58.994 "trid": { 00:39:58.994 "trtype": "PCIe", 00:39:58.994 "traddr": "0000:00:11.0" 00:39:58.994 }, 00:39:58.994 "ctrlr_data": { 00:39:58.994 "cntlid": 0, 00:39:58.994 "vendor_id": "0x1b36", 00:39:58.994 "model_number": "QEMU NVMe Ctrl", 00:39:58.994 "serial_number": "12341", 00:39:58.994 "firmware_revision": "8.0.0", 00:39:58.994 "subnqn": "nqn.2019-08.org.qemu:12341", 00:39:58.994 "oacs": { 00:39:58.994 "security": 0, 00:39:58.994 "format": 1, 00:39:58.994 "firmware": 0, 00:39:58.994 "ns_manage": 1 00:39:58.994 }, 00:39:58.994 "multi_ctrlr": false, 00:39:58.994 "ana_reporting": false 00:39:58.994 }, 00:39:58.994 "vs": { 00:39:58.994 "nvme_version": "1.4" 00:39:58.994 }, 00:39:58.994 "ns_data": { 00:39:58.994 "id": 1, 00:39:58.994 "can_share": false 00:39:58.994 } 00:39:58.994 } 00:39:58.994 ], 00:39:58.994 "mp_policy": "active_passive" 00:39:58.994 } 00:39:58.994 } 00:39:58.994 ]' 00:39:58.994 14:00:06 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:39:58.994 14:00:06 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:39:58.994 14:00:06 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:39:58.994 14:00:06 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:39:58.994 14:00:06 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:39:58.994 14:00:06 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:39:58.994 14:00:06 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:39:58.994 14:00:06 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:39:58.994 14:00:06 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:39:58.994 14:00:06 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:39:58.994 14:00:06 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:39:59.255 14:00:06 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=edb534ee-0fe6-42ca-8693-c2d014d4d1a7 00:39:59.255 14:00:06 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:39:59.255 14:00:06 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u edb534ee-0fe6-42ca-8693-c2d014d4d1a7 00:39:59.514 14:00:07 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:39:59.775 14:00:07 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=fb487434-25e5-4d32-bc66-1d993ea1f8c1 00:39:59.775 14:00:07 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fb487434-25e5-4d32-bc66-1d993ea1f8c1 00:39:59.775 14:00:07 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=e06423a1-0605-4ba7-8f2c-001eb0c0c27a 00:39:59.775 14:00:07 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:39:59.775 14:00:07 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 e06423a1-0605-4ba7-8f2c-001eb0c0c27a 00:39:59.775 14:00:07 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:39:59.775 14:00:07 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:39:59.775 14:00:07 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=e06423a1-0605-4ba7-8f2c-001eb0c0c27a 00:39:59.775 14:00:07 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:39:59.775 14:00:07 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size e06423a1-0605-4ba7-8f2c-001eb0c0c27a 00:39:59.775 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=e06423a1-0605-4ba7-8f2c-001eb0c0c27a 00:39:59.775 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:39:59.775 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:39:59.775 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:39:59.775 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e06423a1-0605-4ba7-8f2c-001eb0c0c27a 00:40:00.034 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:40:00.034 { 00:40:00.034 "name": "e06423a1-0605-4ba7-8f2c-001eb0c0c27a", 00:40:00.034 "aliases": [ 00:40:00.034 "lvs/nvme0n1p0" 00:40:00.034 ], 00:40:00.035 "product_name": "Logical Volume", 00:40:00.035 "block_size": 4096, 00:40:00.035 "num_blocks": 26476544, 00:40:00.035 "uuid": "e06423a1-0605-4ba7-8f2c-001eb0c0c27a", 00:40:00.035 "assigned_rate_limits": { 00:40:00.035 "rw_ios_per_sec": 0, 00:40:00.035 "rw_mbytes_per_sec": 0, 00:40:00.035 "r_mbytes_per_sec": 0, 00:40:00.035 "w_mbytes_per_sec": 0 00:40:00.035 }, 00:40:00.035 "claimed": false, 00:40:00.035 "zoned": false, 00:40:00.035 "supported_io_types": { 00:40:00.035 "read": true, 00:40:00.035 "write": true, 00:40:00.035 "unmap": true, 00:40:00.035 "flush": false, 00:40:00.035 "reset": true, 00:40:00.035 "nvme_admin": false, 00:40:00.035 "nvme_io": false, 00:40:00.035 "nvme_io_md": false, 00:40:00.035 "write_zeroes": true, 00:40:00.035 "zcopy": false, 00:40:00.035 "get_zone_info": false, 00:40:00.035 "zone_management": false, 00:40:00.035 "zone_append": false, 00:40:00.035 "compare": false, 00:40:00.035 "compare_and_write": false, 00:40:00.035 "abort": false, 00:40:00.035 "seek_hole": true, 00:40:00.035 "seek_data": true, 00:40:00.035 "copy": false, 00:40:00.035 "nvme_iov_md": false 00:40:00.035 }, 00:40:00.035 "driver_specific": { 00:40:00.035 "lvol": { 00:40:00.035 "lvol_store_uuid": "fb487434-25e5-4d32-bc66-1d993ea1f8c1", 00:40:00.035 "base_bdev": "nvme0n1", 00:40:00.035 "thin_provision": true, 00:40:00.035 "num_allocated_clusters": 0, 00:40:00.035 "snapshot": false, 00:40:00.035 "clone": false, 00:40:00.035 "esnap_clone": false 00:40:00.035 } 00:40:00.035 } 00:40:00.035 } 00:40:00.035 ]' 00:40:00.035 14:00:07 ftl.ftl_restore -- 
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:40:00.035 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:40:00.035 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:40:00.294 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:40:00.294 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:40:00.294 14:00:07 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:40:00.294 14:00:07 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:40:00.294 14:00:07 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:40:00.294 14:00:07 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:40:00.554 14:00:08 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:40:00.554 14:00:08 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:40:00.554 14:00:08 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size e06423a1-0605-4ba7-8f2c-001eb0c0c27a 00:40:00.554 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=e06423a1-0605-4ba7-8f2c-001eb0c0c27a 00:40:00.554 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:40:00.554 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:40:00.554 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:40:00.554 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e06423a1-0605-4ba7-8f2c-001eb0c0c27a 00:40:00.814 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:40:00.814 { 00:40:00.814 "name": "e06423a1-0605-4ba7-8f2c-001eb0c0c27a", 00:40:00.814 "aliases": [ 00:40:00.814 "lvs/nvme0n1p0" 00:40:00.814 ], 00:40:00.814 "product_name": "Logical Volume", 00:40:00.814 "block_size": 4096, 00:40:00.814 "num_blocks": 26476544, 00:40:00.814 "uuid": "e06423a1-0605-4ba7-8f2c-001eb0c0c27a", 00:40:00.814 "assigned_rate_limits": { 00:40:00.814 "rw_ios_per_sec": 0, 00:40:00.814 "rw_mbytes_per_sec": 0, 00:40:00.814 "r_mbytes_per_sec": 0, 00:40:00.814 "w_mbytes_per_sec": 0 00:40:00.814 }, 00:40:00.814 "claimed": false, 00:40:00.814 "zoned": false, 00:40:00.814 "supported_io_types": { 00:40:00.814 "read": true, 00:40:00.814 "write": true, 00:40:00.814 "unmap": true, 00:40:00.814 "flush": false, 00:40:00.814 "reset": true, 00:40:00.814 "nvme_admin": false, 00:40:00.814 "nvme_io": false, 00:40:00.814 "nvme_io_md": false, 00:40:00.814 "write_zeroes": true, 00:40:00.814 "zcopy": false, 00:40:00.814 "get_zone_info": false, 00:40:00.814 "zone_management": false, 00:40:00.814 "zone_append": false, 00:40:00.814 "compare": false, 00:40:00.814 "compare_and_write": false, 00:40:00.814 "abort": false, 00:40:00.814 "seek_hole": true, 00:40:00.814 "seek_data": true, 00:40:00.814 "copy": false, 00:40:00.814 "nvme_iov_md": false 00:40:00.814 }, 00:40:00.814 "driver_specific": { 00:40:00.814 "lvol": { 00:40:00.814 "lvol_store_uuid": "fb487434-25e5-4d32-bc66-1d993ea1f8c1", 00:40:00.814 "base_bdev": "nvme0n1", 00:40:00.814 "thin_provision": true, 00:40:00.814 "num_allocated_clusters": 0, 00:40:00.814 "snapshot": false, 00:40:00.814 "clone": false, 00:40:00.814 "esnap_clone": false 00:40:00.814 } 00:40:00.814 } 00:40:00.814 } 00:40:00.814 ]' 00:40:00.814 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 
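The get_bdev_size helper that keeps appearing in this log follows one fixed pattern: bdev_get_bdevs returns the bdev's JSON descriptor, jq extracts block_size and num_blocks, and the size in MiB is their product divided by 1024*1024 (here 4096 * 26476544 / 2^20 = 103424, matching the bdev_size values above; for the raw NVMe namespace earlier, 4096 * 1310720 / 2^20 = 5120). A minimal standalone sketch of that pattern, not part of the captured run, assuming a running spdk_tgt on the default RPC socket with an attached bdev named nvme0n1:

    # Sketch only: reconstructs the get_bdev_size pattern seen in this log.
    info=$(scripts/rpc.py bdev_get_bdevs -b nvme0n1)
    bs=$(echo "$info" | jq '.[] .block_size')    # 4096 in this run
    nb=$(echo "$info" | jq '.[] .num_blocks')    # 1310720 for the raw namespace
    echo $(( bs * nb / 1024 / 1024 ))            # -> 5120 (MiB)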
00:40:00.814 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:40:00.814 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:40:00.814 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:40:00.814 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:40:00.814 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:40:00.814 14:00:08 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:40:00.814 14:00:08 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:40:01.073 14:00:08 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:40:01.073 14:00:08 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size e06423a1-0605-4ba7-8f2c-001eb0c0c27a 00:40:01.073 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=e06423a1-0605-4ba7-8f2c-001eb0c0c27a 00:40:01.073 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:40:01.073 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:40:01.073 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:40:01.073 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b e06423a1-0605-4ba7-8f2c-001eb0c0c27a 00:40:01.333 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:40:01.333 { 00:40:01.333 "name": "e06423a1-0605-4ba7-8f2c-001eb0c0c27a", 00:40:01.333 "aliases": [ 00:40:01.333 "lvs/nvme0n1p0" 00:40:01.333 ], 00:40:01.333 "product_name": "Logical Volume", 00:40:01.333 "block_size": 4096, 00:40:01.333 "num_blocks": 26476544, 00:40:01.333 "uuid": "e06423a1-0605-4ba7-8f2c-001eb0c0c27a", 00:40:01.333 "assigned_rate_limits": { 00:40:01.333 "rw_ios_per_sec": 0, 00:40:01.333 "rw_mbytes_per_sec": 0, 00:40:01.333 "r_mbytes_per_sec": 0, 00:40:01.333 "w_mbytes_per_sec": 0 00:40:01.333 }, 00:40:01.333 "claimed": false, 00:40:01.333 "zoned": false, 00:40:01.333 "supported_io_types": { 00:40:01.333 "read": true, 00:40:01.333 "write": true, 00:40:01.333 "unmap": true, 00:40:01.333 "flush": false, 00:40:01.333 "reset": true, 00:40:01.333 "nvme_admin": false, 00:40:01.333 "nvme_io": false, 00:40:01.333 "nvme_io_md": false, 00:40:01.333 "write_zeroes": true, 00:40:01.333 "zcopy": false, 00:40:01.333 "get_zone_info": false, 00:40:01.333 "zone_management": false, 00:40:01.333 "zone_append": false, 00:40:01.333 "compare": false, 00:40:01.333 "compare_and_write": false, 00:40:01.333 "abort": false, 00:40:01.333 "seek_hole": true, 00:40:01.333 "seek_data": true, 00:40:01.333 "copy": false, 00:40:01.333 "nvme_iov_md": false 00:40:01.333 }, 00:40:01.333 "driver_specific": { 00:40:01.333 "lvol": { 00:40:01.333 "lvol_store_uuid": "fb487434-25e5-4d32-bc66-1d993ea1f8c1", 00:40:01.333 "base_bdev": "nvme0n1", 00:40:01.333 "thin_provision": true, 00:40:01.333 "num_allocated_clusters": 0, 00:40:01.333 "snapshot": false, 00:40:01.333 "clone": false, 00:40:01.333 "esnap_clone": false 00:40:01.333 } 00:40:01.333 } 00:40:01.333 } 00:40:01.333 ]' 00:40:01.333 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:40:01.333 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:40:01.333 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:40:01.333 14:00:08 ftl.ftl_restore -- 
common/autotest_common.sh@1388 -- # nb=26476544 00:40:01.333 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:40:01.333 14:00:08 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:40:01.333 14:00:08 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:40:01.333 14:00:08 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d e06423a1-0605-4ba7-8f2c-001eb0c0c27a --l2p_dram_limit 10' 00:40:01.333 14:00:08 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:40:01.333 14:00:08 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:40:01.333 14:00:08 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:40:01.333 14:00:08 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:40:01.333 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:40:01.333 14:00:08 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d e06423a1-0605-4ba7-8f2c-001eb0c0c27a --l2p_dram_limit 10 -c nvc0n1p0 00:40:01.595 [2024-11-20 14:00:09.114959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.595 [2024-11-20 14:00:09.115009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:01.595 [2024-11-20 14:00:09.115027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:40:01.595 [2024-11-20 14:00:09.115036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.595 [2024-11-20 14:00:09.115111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.595 [2024-11-20 14:00:09.115124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:01.595 [2024-11-20 14:00:09.115135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:40:01.595 [2024-11-20 14:00:09.115142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.595 [2024-11-20 14:00:09.115164] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:01.595 [2024-11-20 14:00:09.116237] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:01.595 [2024-11-20 14:00:09.116273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.595 [2024-11-20 14:00:09.116282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:01.595 [2024-11-20 14:00:09.116293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.113 ms 00:40:01.595 [2024-11-20 14:00:09.116302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.595 [2024-11-20 14:00:09.116377] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9fa4affe-81c0-4583-92ea-789529719ac0 00:40:01.595 [2024-11-20 14:00:09.117791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.595 [2024-11-20 14:00:09.117826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:40:01.595 [2024-11-20 14:00:09.117836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:40:01.595 [2024-11-20 14:00:09.117845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.595 [2024-11-20 14:00:09.125141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.595 [2024-11-20 
14:00:09.125268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:01.595 [2024-11-20 14:00:09.125282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.257 ms 00:40:01.595 [2024-11-20 14:00:09.125293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.595 [2024-11-20 14:00:09.125394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.595 [2024-11-20 14:00:09.125411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:01.595 [2024-11-20 14:00:09.125420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:40:01.595 [2024-11-20 14:00:09.125433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.595 [2024-11-20 14:00:09.125492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.595 [2024-11-20 14:00:09.125505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:01.595 [2024-11-20 14:00:09.125513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:40:01.595 [2024-11-20 14:00:09.125525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.595 [2024-11-20 14:00:09.125550] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:01.595 [2024-11-20 14:00:09.130683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.595 [2024-11-20 14:00:09.130713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:01.595 [2024-11-20 14:00:09.130737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.150 ms 00:40:01.595 [2024-11-20 14:00:09.130745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.595 [2024-11-20 14:00:09.130778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.595 [2024-11-20 14:00:09.130788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:01.595 [2024-11-20 14:00:09.130798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:40:01.595 [2024-11-20 14:00:09.130806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.595 [2024-11-20 14:00:09.130844] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:40:01.595 [2024-11-20 14:00:09.130972] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:01.595 [2024-11-20 14:00:09.130989] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:01.595 [2024-11-20 14:00:09.131000] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:01.595 [2024-11-20 14:00:09.131011] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:01.595 [2024-11-20 14:00:09.131021] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:01.595 [2024-11-20 14:00:09.131031] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:40:01.595 [2024-11-20 14:00:09.131039] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:01.595 [2024-11-20 14:00:09.131049] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:01.595 [2024-11-20 14:00:09.131056] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:01.595 [2024-11-20 14:00:09.131065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.595 [2024-11-20 14:00:09.131074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:01.595 [2024-11-20 14:00:09.131083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.224 ms 00:40:01.595 [2024-11-20 14:00:09.131103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.595 [2024-11-20 14:00:09.131177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.595 [2024-11-20 14:00:09.131187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:01.595 [2024-11-20 14:00:09.131196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:40:01.595 [2024-11-20 14:00:09.131203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.595 [2024-11-20 14:00:09.131295] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:01.595 [2024-11-20 14:00:09.131307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:01.595 [2024-11-20 14:00:09.131317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:01.595 [2024-11-20 14:00:09.131325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:01.595 [2024-11-20 14:00:09.131334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:01.595 [2024-11-20 14:00:09.131341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:01.595 [2024-11-20 14:00:09.131349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:40:01.595 [2024-11-20 14:00:09.131355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:01.595 [2024-11-20 14:00:09.131364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:40:01.595 [2024-11-20 14:00:09.131371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:01.595 [2024-11-20 14:00:09.131380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:01.595 [2024-11-20 14:00:09.131387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:40:01.595 [2024-11-20 14:00:09.131395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:01.595 [2024-11-20 14:00:09.131401] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:01.595 [2024-11-20 14:00:09.131409] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:40:01.595 [2024-11-20 14:00:09.131415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:01.595 [2024-11-20 14:00:09.131424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:01.595 [2024-11-20 14:00:09.131430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:40:01.595 [2024-11-20 14:00:09.131439] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:01.595 [2024-11-20 14:00:09.131447] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:01.595 [2024-11-20 14:00:09.131454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:40:01.595 [2024-11-20 14:00:09.131460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:01.595 [2024-11-20 14:00:09.131468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:01.595 
[2024-11-20 14:00:09.131474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:40:01.595 [2024-11-20 14:00:09.131482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:01.595 [2024-11-20 14:00:09.131488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:01.595 [2024-11-20 14:00:09.131496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:40:01.595 [2024-11-20 14:00:09.131502] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:01.595 [2024-11-20 14:00:09.131510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:01.595 [2024-11-20 14:00:09.131532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:40:01.595 [2024-11-20 14:00:09.131540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:01.595 [2024-11-20 14:00:09.131546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:01.595 [2024-11-20 14:00:09.131556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:40:01.595 [2024-11-20 14:00:09.131563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:01.595 [2024-11-20 14:00:09.131571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:01.595 [2024-11-20 14:00:09.131577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:40:01.595 [2024-11-20 14:00:09.131586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:01.595 [2024-11-20 14:00:09.131592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:01.595 [2024-11-20 14:00:09.131600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:40:01.595 [2024-11-20 14:00:09.131606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:01.595 [2024-11-20 14:00:09.131625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:01.595 [2024-11-20 14:00:09.131632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:40:01.595 [2024-11-20 14:00:09.131642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:01.595 [2024-11-20 14:00:09.131647] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:01.595 [2024-11-20 14:00:09.131661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:01.596 [2024-11-20 14:00:09.131668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:01.596 [2024-11-20 14:00:09.131678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:01.596 [2024-11-20 14:00:09.131686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:01.596 [2024-11-20 14:00:09.131696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:01.596 [2024-11-20 14:00:09.131703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:01.596 [2024-11-20 14:00:09.131712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:01.596 [2024-11-20 14:00:09.131730] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:01.596 [2024-11-20 14:00:09.131738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:01.596 [2024-11-20 14:00:09.131748] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:01.596 [2024-11-20 
14:00:09.131761] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:01.596 [2024-11-20 14:00:09.131771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:40:01.596 [2024-11-20 14:00:09.131781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:40:01.596 [2024-11-20 14:00:09.131788] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:40:01.596 [2024-11-20 14:00:09.131798] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:40:01.596 [2024-11-20 14:00:09.131805] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:40:01.596 [2024-11-20 14:00:09.131813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:40:01.596 [2024-11-20 14:00:09.131820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:40:01.596 [2024-11-20 14:00:09.131843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:40:01.596 [2024-11-20 14:00:09.131851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:40:01.596 [2024-11-20 14:00:09.131862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:40:01.596 [2024-11-20 14:00:09.131872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:40:01.596 [2024-11-20 14:00:09.131890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:40:01.596 [2024-11-20 14:00:09.131897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:40:01.596 [2024-11-20 14:00:09.131906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:40:01.596 [2024-11-20 14:00:09.131913] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:01.596 [2024-11-20 14:00:09.131923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:01.596 [2024-11-20 14:00:09.131931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:01.596 [2024-11-20 14:00:09.131940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:01.596 [2024-11-20 14:00:09.131947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:01.596 [2024-11-20 14:00:09.131956] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:01.596 [2024-11-20 14:00:09.131963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:01.596 [2024-11-20 14:00:09.131974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:01.596 [2024-11-20 14:00:09.131982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.725 ms 00:40:01.596 [2024-11-20 14:00:09.131992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:01.596 [2024-11-20 14:00:09.132032] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:40:01.596 [2024-11-20 14:00:09.132046] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:40:05.804 [2024-11-20 14:00:12.622188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:12.622254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:40:05.804 [2024-11-20 14:00:12.622270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3496.883 ms 00:40:05.804 [2024-11-20 14:00:12.622281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:12.660326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:12.660384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:05.804 [2024-11-20 14:00:12.660398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.815 ms 00:40:05.804 [2024-11-20 14:00:12.660409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:12.660577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:12.660592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:05.804 [2024-11-20 14:00:12.660601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:40:05.804 [2024-11-20 14:00:12.660618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:12.706485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:12.706642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:05.804 [2024-11-20 14:00:12.706661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.902 ms 00:40:05.804 [2024-11-20 14:00:12.706671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:12.706741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:12.706759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:05.804 [2024-11-20 14:00:12.706784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:40:05.804 [2024-11-20 14:00:12.706795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:12.707278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:12.707305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:05.804 [2024-11-20 14:00:12.707315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:40:05.804 [2024-11-20 14:00:12.707325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 
[2024-11-20 14:00:12.707436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:12.707449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:05.804 [2024-11-20 14:00:12.707461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:40:05.804 [2024-11-20 14:00:12.707476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:12.727100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:12.727149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:05.804 [2024-11-20 14:00:12.727162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.641 ms 00:40:05.804 [2024-11-20 14:00:12.727173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:12.758033] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:40:05.804 [2024-11-20 14:00:12.761542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:12.761579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:05.804 [2024-11-20 14:00:12.761594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.314 ms 00:40:05.804 [2024-11-20 14:00:12.761603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:12.848344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:12.848407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:40:05.804 [2024-11-20 14:00:12.848423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.852 ms 00:40:05.804 [2024-11-20 14:00:12.848432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:12.848618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:12.848633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:05.804 [2024-11-20 14:00:12.848646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:40:05.804 [2024-11-20 14:00:12.848653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:12.883726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:12.883768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:40:05.804 [2024-11-20 14:00:12.883784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.090 ms 00:40:05.804 [2024-11-20 14:00:12.883792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:12.918127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:12.918160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:40:05.804 [2024-11-20 14:00:12.918173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.356 ms 00:40:05.804 [2024-11-20 14:00:12.918181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:12.918908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:12.918924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:05.804 
[2024-11-20 14:00:12.918935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:40:05.804 [2024-11-20 14:00:12.918946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:13.021792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:13.021839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:40:05.804 [2024-11-20 14:00:13.021858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 102.994 ms 00:40:05.804 [2024-11-20 14:00:13.021866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:13.057734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:13.057778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:40:05.804 [2024-11-20 14:00:13.057794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.859 ms 00:40:05.804 [2024-11-20 14:00:13.057801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:13.093282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:13.093318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:40:05.804 [2024-11-20 14:00:13.093332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.506 ms 00:40:05.804 [2024-11-20 14:00:13.093340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:13.128427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:13.128462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:05.804 [2024-11-20 14:00:13.128475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.114 ms 00:40:05.804 [2024-11-20 14:00:13.128483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.804 [2024-11-20 14:00:13.128525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.804 [2024-11-20 14:00:13.128535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:40:05.804 [2024-11-20 14:00:13.128549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:40:05.805 [2024-11-20 14:00:13.128557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.805 [2024-11-20 14:00:13.128654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:05.805 [2024-11-20 14:00:13.128664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:05.805 [2024-11-20 14:00:13.128678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:40:05.805 [2024-11-20 14:00:13.128685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:05.805 [2024-11-20 14:00:13.129834] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4022.141 ms, result 0 00:40:05.805 { 00:40:05.805 "name": "ftl0", 00:40:05.805 "uuid": "9fa4affe-81c0-4583-92ea-789529719ac0" 00:40:05.805 } 00:40:05.805 14:00:13 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:40:05.805 14:00:13 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:40:05.805 14:00:13 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:40:05.805 14:00:13 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:40:06.066 [2024-11-20 14:00:13.552340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.066 [2024-11-20 14:00:13.552400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:06.066 [2024-11-20 14:00:13.552421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:06.066 [2024-11-20 14:00:13.552441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.066 [2024-11-20 14:00:13.552467] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:06.066 [2024-11-20 14:00:13.556657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.066 [2024-11-20 14:00:13.556766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:06.066 [2024-11-20 14:00:13.556793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.177 ms 00:40:06.066 [2024-11-20 14:00:13.556803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.066 [2024-11-20 14:00:13.557101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.066 [2024-11-20 14:00:13.557119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:06.066 [2024-11-20 14:00:13.557131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:40:06.066 [2024-11-20 14:00:13.557141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.066 [2024-11-20 14:00:13.559816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.066 [2024-11-20 14:00:13.559879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:06.066 [2024-11-20 14:00:13.559896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.661 ms 00:40:06.066 [2024-11-20 14:00:13.559905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.066 [2024-11-20 14:00:13.564944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.066 [2024-11-20 14:00:13.564976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:06.066 [2024-11-20 14:00:13.564991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.020 ms 00:40:06.066 [2024-11-20 14:00:13.564999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.066 [2024-11-20 14:00:13.601365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.066 [2024-11-20 14:00:13.601424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:06.066 [2024-11-20 14:00:13.601441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.363 ms 00:40:06.066 [2024-11-20 14:00:13.601449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.066 [2024-11-20 14:00:13.623336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.066 [2024-11-20 14:00:13.623379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:06.066 [2024-11-20 14:00:13.623395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.876 ms 00:40:06.066 [2024-11-20 14:00:13.623404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.066 [2024-11-20 14:00:13.623561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.066 [2024-11-20 14:00:13.623574] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:06.066 [2024-11-20 14:00:13.623587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:40:06.066 [2024-11-20 14:00:13.623595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.066 [2024-11-20 14:00:13.661143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.066 [2024-11-20 14:00:13.661243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:06.066 [2024-11-20 14:00:13.661279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.584 ms 00:40:06.066 [2024-11-20 14:00:13.661299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.066 [2024-11-20 14:00:13.696174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.066 [2024-11-20 14:00:13.696254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:06.066 [2024-11-20 14:00:13.696289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.883 ms 00:40:06.066 [2024-11-20 14:00:13.696310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.066 [2024-11-20 14:00:13.730419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.066 [2024-11-20 14:00:13.730496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:06.066 [2024-11-20 14:00:13.730527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.117 ms 00:40:06.066 [2024-11-20 14:00:13.730547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.066 [2024-11-20 14:00:13.765475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.066 [2024-11-20 14:00:13.765571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:06.066 [2024-11-20 14:00:13.765606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.874 ms 00:40:06.066 [2024-11-20 14:00:13.765627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.066 [2024-11-20 14:00:13.765687] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:06.066 [2024-11-20 14:00:13.765707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765804] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:06.066 [2024-11-20 14:00:13.765900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.765909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.765916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.765927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.765937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.765946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.765954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.765964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.765976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.765987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.765994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 
[2024-11-20 14:00:13.766027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:40:06.067 [2024-11-20 14:00:13.766254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:06.067 [2024-11-20 14:00:13.766641] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:06.067 [2024-11-20 14:00:13.766654] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9fa4affe-81c0-4583-92ea-789529719ac0 00:40:06.067 [2024-11-20 14:00:13.766662] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:40:06.067 [2024-11-20 14:00:13.766673] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:40:06.067 [2024-11-20 14:00:13.766680] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:40:06.067 [2024-11-20 14:00:13.766692] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:40:06.067 [2024-11-20 14:00:13.766701] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:06.067 [2024-11-20 14:00:13.766711] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:06.067 [2024-11-20 14:00:13.766719] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:06.067 [2024-11-20 14:00:13.766727] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:06.067 [2024-11-20 14:00:13.766744] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:40:06.067 [2024-11-20 14:00:13.766754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.067 [2024-11-20 14:00:13.766762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:06.068 [2024-11-20 14:00:13.766774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.072 ms 00:40:06.068 [2024-11-20 14:00:13.766782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.328 [2024-11-20 14:00:13.787845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.328 [2024-11-20 14:00:13.787953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:06.328 [2024-11-20 14:00:13.787985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.041 ms 00:40:06.328 [2024-11-20 14:00:13.787994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.328 [2024-11-20 14:00:13.788576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:06.328 [2024-11-20 14:00:13.788592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:06.328 [2024-11-20 14:00:13.788607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:40:06.328 [2024-11-20 14:00:13.788615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.328 [2024-11-20 14:00:13.854172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:06.328 [2024-11-20 14:00:13.854221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:06.328 [2024-11-20 14:00:13.854235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:06.328 [2024-11-20 14:00:13.854243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.328 [2024-11-20 14:00:13.854323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:06.328 [2024-11-20 14:00:13.854331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:06.328 [2024-11-20 14:00:13.854345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:06.328 [2024-11-20 14:00:13.854352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.328 [2024-11-20 14:00:13.854476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:06.328 [2024-11-20 14:00:13.854488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:06.328 [2024-11-20 14:00:13.854500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:06.328 [2024-11-20 14:00:13.854507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.328 [2024-11-20 14:00:13.854530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:06.328 [2024-11-20 14:00:13.854539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:06.328 [2024-11-20 14:00:13.854549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:06.328 [2024-11-20 14:00:13.854557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.328 [2024-11-20 14:00:13.976752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:06.328 [2024-11-20 14:00:13.976866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:06.328 [2024-11-20 14:00:13.976886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:40:06.328 [2024-11-20 14:00:13.976895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.588 [2024-11-20 14:00:14.072298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:06.588 [2024-11-20 14:00:14.072357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:06.588 [2024-11-20 14:00:14.072371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:06.588 [2024-11-20 14:00:14.072382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.588 [2024-11-20 14:00:14.072506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:06.588 [2024-11-20 14:00:14.072517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:06.588 [2024-11-20 14:00:14.072529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:06.588 [2024-11-20 14:00:14.072537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.588 [2024-11-20 14:00:14.072586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:06.588 [2024-11-20 14:00:14.072595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:06.588 [2024-11-20 14:00:14.072605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:06.588 [2024-11-20 14:00:14.072613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.588 [2024-11-20 14:00:14.072739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:06.588 [2024-11-20 14:00:14.072753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:06.588 [2024-11-20 14:00:14.072765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:06.588 [2024-11-20 14:00:14.072773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.588 [2024-11-20 14:00:14.072817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:06.588 [2024-11-20 14:00:14.072827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:06.588 [2024-11-20 14:00:14.072838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:06.588 [2024-11-20 14:00:14.072845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.588 [2024-11-20 14:00:14.072891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:06.588 [2024-11-20 14:00:14.072900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:06.588 [2024-11-20 14:00:14.072910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:06.588 [2024-11-20 14:00:14.072918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.588 [2024-11-20 14:00:14.072968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:06.589 [2024-11-20 14:00:14.072977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:06.589 [2024-11-20 14:00:14.072987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:06.589 [2024-11-20 14:00:14.072995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:06.589 [2024-11-20 14:00:14.073133] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 521.766 ms, result 0 00:40:06.589 true 00:40:06.589 14:00:14 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79906 
00:40:06.589 14:00:14 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79906 ']' 00:40:06.589 14:00:14 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79906 00:40:06.589 14:00:14 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:40:06.589 14:00:14 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:06.589 14:00:14 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79906 00:40:06.589 14:00:14 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:06.589 14:00:14 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:06.589 14:00:14 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79906' 00:40:06.589 killing process with pid 79906 00:40:06.589 14:00:14 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79906 00:40:06.589 14:00:14 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79906 00:40:16.575 14:00:23 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:40:19.867 262144+0 records in 00:40:19.867 262144+0 records out 00:40:19.867 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.74759 s, 287 MB/s 00:40:19.867 14:00:27 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:40:21.775 14:00:28 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:40:21.775 [2024-11-20 14:00:29.059983] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:40:21.775 [2024-11-20 14:00:29.060104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80210 ] 00:40:21.775 [2024-11-20 14:00:29.241203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:21.775 [2024-11-20 14:00:29.368777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:22.034 [2024-11-20 14:00:29.737800] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:22.034 [2024-11-20 14:00:29.737947] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:22.293 [2024-11-20 14:00:29.898569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.293 [2024-11-20 14:00:29.898712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:22.293 [2024-11-20 14:00:29.898746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:40:22.293 [2024-11-20 14:00:29.898755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.293 [2024-11-20 14:00:29.898821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.293 [2024-11-20 14:00:29.898832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:22.293 [2024-11-20 14:00:29.898846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:40:22.293 [2024-11-20 14:00:29.898854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.293 [2024-11-20 14:00:29.898875] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:40:22.293 [2024-11-20 14:00:29.899922] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:22.293 [2024-11-20 14:00:29.899953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.293 [2024-11-20 14:00:29.899964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:22.293 [2024-11-20 14:00:29.899974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.086 ms 00:40:22.293 [2024-11-20 14:00:29.899985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.293 [2024-11-20 14:00:29.901474] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:22.293 [2024-11-20 14:00:29.920855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.293 [2024-11-20 14:00:29.920986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:22.293 [2024-11-20 14:00:29.921013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.419 ms 00:40:22.293 [2024-11-20 14:00:29.921021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.293 [2024-11-20 14:00:29.921116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.293 [2024-11-20 14:00:29.921126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:22.293 [2024-11-20 14:00:29.921135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:40:22.293 [2024-11-20 14:00:29.921143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.293 [2024-11-20 14:00:29.928318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.293 [2024-11-20 14:00:29.928362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:22.293 [2024-11-20 14:00:29.928373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.111 ms 00:40:22.294 [2024-11-20 14:00:29.928392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.294 [2024-11-20 14:00:29.928497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.294 [2024-11-20 14:00:29.928512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:22.294 [2024-11-20 14:00:29.928521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:40:22.294 [2024-11-20 14:00:29.928528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.294 [2024-11-20 14:00:29.928579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.294 [2024-11-20 14:00:29.928589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:22.294 [2024-11-20 14:00:29.928597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:40:22.294 [2024-11-20 14:00:29.928604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.294 [2024-11-20 14:00:29.928636] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:22.294 [2024-11-20 14:00:29.933290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.294 [2024-11-20 14:00:29.933320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:22.294 [2024-11-20 14:00:29.933330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.678 ms 00:40:22.294 [2024-11-20 14:00:29.933343] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.294 [2024-11-20 14:00:29.933372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.294 [2024-11-20 14:00:29.933380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:22.294 [2024-11-20 14:00:29.933388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:40:22.294 [2024-11-20 14:00:29.933396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.294 [2024-11-20 14:00:29.933447] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:22.294 [2024-11-20 14:00:29.933474] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:22.294 [2024-11-20 14:00:29.933509] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:22.294 [2024-11-20 14:00:29.933528] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:22.294 [2024-11-20 14:00:29.933616] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:22.294 [2024-11-20 14:00:29.933627] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:22.294 [2024-11-20 14:00:29.933636] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:22.294 [2024-11-20 14:00:29.933646] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:22.294 [2024-11-20 14:00:29.933654] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:22.294 [2024-11-20 14:00:29.933663] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:40:22.294 [2024-11-20 14:00:29.933670] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:22.294 [2024-11-20 14:00:29.933678] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:22.294 [2024-11-20 14:00:29.933691] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:22.294 [2024-11-20 14:00:29.933700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.294 [2024-11-20 14:00:29.933708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:22.294 [2024-11-20 14:00:29.933733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.256 ms 00:40:22.294 [2024-11-20 14:00:29.933751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.294 [2024-11-20 14:00:29.933819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.294 [2024-11-20 14:00:29.933829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:22.294 [2024-11-20 14:00:29.933837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:40:22.294 [2024-11-20 14:00:29.933845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.294 [2024-11-20 14:00:29.933959] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:22.294 [2024-11-20 14:00:29.933974] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:22.294 [2024-11-20 14:00:29.933983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:40:22.294 [2024-11-20 14:00:29.933990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:22.294 [2024-11-20 14:00:29.933999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:22.294 [2024-11-20 14:00:29.934017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:22.294 [2024-11-20 14:00:29.934025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:40:22.294 [2024-11-20 14:00:29.934032] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:22.294 [2024-11-20 14:00:29.934040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:40:22.294 [2024-11-20 14:00:29.934047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:22.294 [2024-11-20 14:00:29.934054] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:22.294 [2024-11-20 14:00:29.934060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:40:22.294 [2024-11-20 14:00:29.934067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:22.294 [2024-11-20 14:00:29.934073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:22.294 [2024-11-20 14:00:29.934079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:40:22.294 [2024-11-20 14:00:29.934097] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:22.294 [2024-11-20 14:00:29.934104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:22.294 [2024-11-20 14:00:29.934111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:40:22.294 [2024-11-20 14:00:29.934118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:22.294 [2024-11-20 14:00:29.934124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:22.294 [2024-11-20 14:00:29.934131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:40:22.294 [2024-11-20 14:00:29.934137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:22.294 [2024-11-20 14:00:29.934143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:22.294 [2024-11-20 14:00:29.934149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:40:22.294 [2024-11-20 14:00:29.934156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:22.294 [2024-11-20 14:00:29.934161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:22.294 [2024-11-20 14:00:29.934167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:40:22.294 [2024-11-20 14:00:29.934173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:22.294 [2024-11-20 14:00:29.934179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:22.294 [2024-11-20 14:00:29.934185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:40:22.294 [2024-11-20 14:00:29.934191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:22.294 [2024-11-20 14:00:29.934197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:22.294 [2024-11-20 14:00:29.934204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:40:22.294 [2024-11-20 14:00:29.934210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:22.294 [2024-11-20 14:00:29.934216] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:40:22.294 [2024-11-20 14:00:29.934222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:40:22.294 [2024-11-20 14:00:29.934228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:22.294 [2024-11-20 14:00:29.934234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:22.294 [2024-11-20 14:00:29.934241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:40:22.294 [2024-11-20 14:00:29.934246] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:22.294 [2024-11-20 14:00:29.934253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:22.294 [2024-11-20 14:00:29.934259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:40:22.294 [2024-11-20 14:00:29.934265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:22.294 [2024-11-20 14:00:29.934271] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:22.294 [2024-11-20 14:00:29.934278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:22.294 [2024-11-20 14:00:29.934284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:22.294 [2024-11-20 14:00:29.934292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:22.294 [2024-11-20 14:00:29.934298] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:22.294 [2024-11-20 14:00:29.934304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:22.294 [2024-11-20 14:00:29.934310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:22.294 [2024-11-20 14:00:29.934316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:22.294 [2024-11-20 14:00:29.934322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:22.294 [2024-11-20 14:00:29.934330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:22.294 [2024-11-20 14:00:29.934339] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:22.294 [2024-11-20 14:00:29.934347] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:22.294 [2024-11-20 14:00:29.934356] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:40:22.294 [2024-11-20 14:00:29.934363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:40:22.294 [2024-11-20 14:00:29.934370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:40:22.294 [2024-11-20 14:00:29.934377] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:40:22.294 [2024-11-20 14:00:29.934383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:40:22.294 [2024-11-20 14:00:29.934390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:40:22.294 [2024-11-20 14:00:29.934397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:40:22.294 [2024-11-20 14:00:29.934404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:40:22.294 [2024-11-20 14:00:29.934410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:40:22.294 [2024-11-20 14:00:29.934416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:40:22.294 [2024-11-20 14:00:29.934422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:40:22.294 [2024-11-20 14:00:29.934429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:40:22.294 [2024-11-20 14:00:29.934436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:40:22.294 [2024-11-20 14:00:29.934444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:40:22.294 [2024-11-20 14:00:29.934450] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:22.294 [2024-11-20 14:00:29.934464] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:22.294 [2024-11-20 14:00:29.934472] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:22.294 [2024-11-20 14:00:29.934479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:22.294 [2024-11-20 14:00:29.934487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:22.294 [2024-11-20 14:00:29.934494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:22.294 [2024-11-20 14:00:29.934502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.294 [2024-11-20 14:00:29.934510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:22.294 [2024-11-20 14:00:29.934517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:40:22.294 [2024-11-20 14:00:29.934524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.294 [2024-11-20 14:00:29.973905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.294 [2024-11-20 14:00:29.974040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:22.295 [2024-11-20 14:00:29.974058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.405 ms 00:40:22.295 [2024-11-20 14:00:29.974066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.295 [2024-11-20 14:00:29.974175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.295 [2024-11-20 14:00:29.974185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:22.295 [2024-11-20 14:00:29.974194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.054 ms 00:40:22.295 [2024-11-20 14:00:29.974202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.554 [2024-11-20 14:00:30.031518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.554 [2024-11-20 14:00:30.031673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:22.554 [2024-11-20 14:00:30.031692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.347 ms 00:40:22.554 [2024-11-20 14:00:30.031701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.554 [2024-11-20 14:00:30.031782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.554 [2024-11-20 14:00:30.031791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:22.554 [2024-11-20 14:00:30.031808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:40:22.554 [2024-11-20 14:00:30.031817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.554 [2024-11-20 14:00:30.032324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.554 [2024-11-20 14:00:30.032342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:22.554 [2024-11-20 14:00:30.032351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:40:22.554 [2024-11-20 14:00:30.032359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.554 [2024-11-20 14:00:30.032479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.554 [2024-11-20 14:00:30.032495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:22.554 [2024-11-20 14:00:30.032503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:40:22.554 [2024-11-20 14:00:30.032519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.554 [2024-11-20 14:00:30.051566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.554 [2024-11-20 14:00:30.051631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:22.554 [2024-11-20 14:00:30.051668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.061 ms 00:40:22.554 [2024-11-20 14:00:30.051676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.554 [2024-11-20 14:00:30.071206] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:40:22.554 [2024-11-20 14:00:30.071257] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:22.554 [2024-11-20 14:00:30.071271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.554 [2024-11-20 14:00:30.071281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:22.554 [2024-11-20 14:00:30.071291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.471 ms 00:40:22.554 [2024-11-20 14:00:30.071299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.554 [2024-11-20 14:00:30.102351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.554 [2024-11-20 14:00:30.102451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:22.554 [2024-11-20 14:00:30.102467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.046 ms 00:40:22.554 [2024-11-20 14:00:30.102476] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.554 [2024-11-20 14:00:30.121826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.554 [2024-11-20 14:00:30.121900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:22.554 [2024-11-20 14:00:30.121913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.289 ms 00:40:22.554 [2024-11-20 14:00:30.121921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.554 [2024-11-20 14:00:30.140725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.554 [2024-11-20 14:00:30.140784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:22.554 [2024-11-20 14:00:30.140798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.778 ms 00:40:22.554 [2024-11-20 14:00:30.140807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.554 [2024-11-20 14:00:30.141648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.554 [2024-11-20 14:00:30.141674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:22.554 [2024-11-20 14:00:30.141684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:40:22.554 [2024-11-20 14:00:30.141692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.554 [2024-11-20 14:00:30.229307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.555 [2024-11-20 14:00:30.229372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:40:22.555 [2024-11-20 14:00:30.229385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.756 ms 00:40:22.555 [2024-11-20 14:00:30.229408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.555 [2024-11-20 14:00:30.241524] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:40:22.555 [2024-11-20 14:00:30.244840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.555 [2024-11-20 14:00:30.244924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:40:22.555 [2024-11-20 14:00:30.244957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.382 ms 00:40:22.555 [2024-11-20 14:00:30.244978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.555 [2024-11-20 14:00:30.245116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.555 [2024-11-20 14:00:30.245164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:40:22.555 [2024-11-20 14:00:30.245206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:40:22.555 [2024-11-20 14:00:30.245226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.555 [2024-11-20 14:00:30.245344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.555 [2024-11-20 14:00:30.245382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:40:22.555 [2024-11-20 14:00:30.245414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:40:22.555 [2024-11-20 14:00:30.245433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.555 [2024-11-20 14:00:30.245502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.555 [2024-11-20 14:00:30.245540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:40:22.555 [2024-11-20 14:00:30.245551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:40:22.555 [2024-11-20 14:00:30.245559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.555 [2024-11-20 14:00:30.245598] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:40:22.555 [2024-11-20 14:00:30.245609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.555 [2024-11-20 14:00:30.245623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:40:22.555 [2024-11-20 14:00:30.245631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:40:22.555 [2024-11-20 14:00:30.245639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.813 [2024-11-20 14:00:30.283256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.813 [2024-11-20 14:00:30.283319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:40:22.813 [2024-11-20 14:00:30.283332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.664 ms 00:40:22.813 [2024-11-20 14:00:30.283340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.813 [2024-11-20 14:00:30.283463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:22.813 [2024-11-20 14:00:30.283474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:40:22.813 [2024-11-20 14:00:30.283483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:40:22.813 [2024-11-20 14:00:30.283490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:22.813 [2024-11-20 14:00:30.284934] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 386.538 ms, result 0 00:40:23.759  [2024-11-20T14:00:32.417Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-20T14:00:33.358Z] Copying: 58/1024 [MB] (29 MBps) [2024-11-20T14:00:34.297Z] Copying: 87/1024 [MB] (29 MBps) [2024-11-20T14:00:35.676Z] Copying: 118/1024 [MB] (30 MBps) [2024-11-20T14:00:36.615Z] Copying: 149/1024 [MB] (31 MBps) [2024-11-20T14:00:37.554Z] Copying: 180/1024 [MB] (31 MBps) [2024-11-20T14:00:38.491Z] Copying: 210/1024 [MB] (30 MBps) [2024-11-20T14:00:39.430Z] Copying: 241/1024 [MB] (30 MBps) [2024-11-20T14:00:40.369Z] Copying: 271/1024 [MB] (30 MBps) [2024-11-20T14:00:41.308Z] Copying: 301/1024 [MB] (29 MBps) [2024-11-20T14:00:42.688Z] Copying: 331/1024 [MB] (29 MBps) [2024-11-20T14:00:43.628Z] Copying: 360/1024 [MB] (29 MBps) [2024-11-20T14:00:44.566Z] Copying: 391/1024 [MB] (30 MBps) [2024-11-20T14:00:45.505Z] Copying: 421/1024 [MB] (30 MBps) [2024-11-20T14:00:46.444Z] Copying: 452/1024 [MB] (31 MBps) [2024-11-20T14:00:47.382Z] Copying: 483/1024 [MB] (31 MBps) [2024-11-20T14:00:48.329Z] Copying: 515/1024 [MB] (31 MBps) [2024-11-20T14:00:49.280Z] Copying: 545/1024 [MB] (30 MBps) [2024-11-20T14:00:50.658Z] Copying: 575/1024 [MB] (29 MBps) [2024-11-20T14:00:51.596Z] Copying: 606/1024 [MB] (31 MBps) [2024-11-20T14:00:52.535Z] Copying: 636/1024 [MB] (29 MBps) [2024-11-20T14:00:53.473Z] Copying: 666/1024 [MB] (30 MBps) [2024-11-20T14:00:54.410Z] Copying: 697/1024 [MB] (30 MBps) [2024-11-20T14:00:55.348Z] Copying: 728/1024 [MB] (31 MBps) [2024-11-20T14:00:56.287Z] Copying: 758/1024 [MB] (30 MBps) [2024-11-20T14:00:57.678Z] Copying: 790/1024 [MB] (31 MBps) [2024-11-20T14:00:58.245Z] Copying: 820/1024 [MB] (30 
MBps) [2024-11-20T14:00:59.621Z] Copying: 851/1024 [MB] (30 MBps) [2024-11-20T14:01:00.560Z] Copying: 881/1024 [MB] (30 MBps) [2024-11-20T14:01:01.498Z] Copying: 911/1024 [MB] (30 MBps) [2024-11-20T14:01:02.437Z] Copying: 942/1024 [MB] (30 MBps) [2024-11-20T14:01:03.376Z] Copying: 973/1024 [MB] (30 MBps) [2024-11-20T14:01:03.980Z] Copying: 1004/1024 [MB] (31 MBps) [2024-11-20T14:01:03.980Z] Copying: 1024/1024 [MB] (average 30 MBps)[2024-11-20 14:01:03.854930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:56.261 [2024-11-20 14:01:03.855004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:40:56.261 [2024-11-20 14:01:03.855019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:40:56.261 [2024-11-20 14:01:03.855029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.261 [2024-11-20 14:01:03.855061] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:40:56.261 [2024-11-20 14:01:03.859313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:56.261 [2024-11-20 14:01:03.859356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:40:56.261 [2024-11-20 14:01:03.859367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.243 ms 00:40:56.261 [2024-11-20 14:01:03.859384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.261 [2024-11-20 14:01:03.861532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:56.261 [2024-11-20 14:01:03.861649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:40:56.261 [2024-11-20 14:01:03.861665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.125 ms 00:40:56.261 [2024-11-20 14:01:03.861674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.261 [2024-11-20 14:01:03.878766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:56.261 [2024-11-20 14:01:03.878803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:40:56.261 [2024-11-20 14:01:03.878813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.107 ms 00:40:56.261 [2024-11-20 14:01:03.878822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.261 [2024-11-20 14:01:03.883835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:56.261 [2024-11-20 14:01:03.883868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:40:56.261 [2024-11-20 14:01:03.883878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.986 ms 00:40:56.261 [2024-11-20 14:01:03.883886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.261 [2024-11-20 14:01:03.919600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:56.261 [2024-11-20 14:01:03.919746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:40:56.261 [2024-11-20 14:01:03.919762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.702 ms 00:40:56.261 [2024-11-20 14:01:03.919770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.261 [2024-11-20 14:01:03.940762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:56.261 [2024-11-20 14:01:03.940799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:40:56.261 [2024-11-20 14:01:03.940811] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.998 ms 00:40:56.261 [2024-11-20 14:01:03.940818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.261 [2024-11-20 14:01:03.940942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:56.261 [2024-11-20 14:01:03.940956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:40:56.261 [2024-11-20 14:01:03.940972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:40:56.261 [2024-11-20 14:01:03.940979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.261 [2024-11-20 14:01:03.977859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:56.261 [2024-11-20 14:01:03.977898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:40:56.261 [2024-11-20 14:01:03.977909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.936 ms 00:40:56.261 [2024-11-20 14:01:03.977918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.523 [2024-11-20 14:01:04.014069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:56.523 [2024-11-20 14:01:04.014107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:40:56.523 [2024-11-20 14:01:04.014132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.184 ms 00:40:56.523 [2024-11-20 14:01:04.014139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.523 [2024-11-20 14:01:04.050255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:56.523 [2024-11-20 14:01:04.050293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:40:56.523 [2024-11-20 14:01:04.050304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.150 ms 00:40:56.523 [2024-11-20 14:01:04.050312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.523 [2024-11-20 14:01:04.087162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:56.523 [2024-11-20 14:01:04.087204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:40:56.523 [2024-11-20 14:01:04.087215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.852 ms 00:40:56.523 [2024-11-20 14:01:04.087223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.523 [2024-11-20 14:01:04.087259] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:40:56.523 [2024-11-20 14:01:04.087276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: 
free 00:40:56.523 [2024-11-20 14:01:04.087337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 
261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:40:56.523 [2024-11-20 14:01:04.087546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.087987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088014] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:40:56.524 [2024-11-20 14:01:04.088190] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:40:56.524 [2024-11-20 14:01:04.088203] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9fa4affe-81c0-4583-92ea-789529719ac0 00:40:56.524 [2024-11-20 14:01:04.088214] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:40:56.524 [2024-11-20 14:01:04.088223] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:40:56.524 [2024-11-20 14:01:04.088237] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:40:56.524 [2024-11-20 14:01:04.088246] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:40:56.524 [2024-11-20 14:01:04.088253] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:40:56.524 [2024-11-20 14:01:04.088262] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:40:56.524 [2024-11-20 14:01:04.088271] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:40:56.524 [2024-11-20 14:01:04.088293] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:40:56.524 [2024-11-20 14:01:04.088301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:40:56.524 [2024-11-20 14:01:04.088309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:56.524 [2024-11-20 14:01:04.088318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:40:56.524 [2024-11-20 14:01:04.088328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.054 ms 00:40:56.524 [2024-11-20 14:01:04.088336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.524 [2024-11-20 14:01:04.109302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:56.524 [2024-11-20 14:01:04.109422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:40:56.524 [2024-11-20 14:01:04.109436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.973 ms 00:40:56.524 [2024-11-20 14:01:04.109444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.524 [2024-11-20 14:01:04.109996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:56.524 [2024-11-20 14:01:04.110008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:40:56.524 [2024-11-20 14:01:04.110017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:40:56.524 [2024-11-20 14:01:04.110025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.524 [2024-11-20 14:01:04.162122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:56.524 [2024-11-20 14:01:04.162177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:56.524 [2024-11-20 14:01:04.162189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:56.524 [2024-11-20 14:01:04.162196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.524 [2024-11-20 14:01:04.162257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:56.525 [2024-11-20 14:01:04.162266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:56.525 [2024-11-20 14:01:04.162273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:56.525 [2024-11-20 14:01:04.162280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.525 [2024-11-20 14:01:04.162351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:56.525 [2024-11-20 14:01:04.162363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:56.525 [2024-11-20 14:01:04.162371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:56.525 [2024-11-20 14:01:04.162378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.525 [2024-11-20 14:01:04.162393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:56.525 [2024-11-20 14:01:04.162401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:56.525 [2024-11-20 14:01:04.162408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:56.525 [2024-11-20 14:01:04.162415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
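The statistics dump above reports 960 total (internal) writes against 0 user writes, which is why the device prints "WAF: inf": write amplification is the ratio of media writes to user writes, and no user I/O has landed at this point. A minimal Python sketch of that calculation (illustrative only, not SPDK code; it assumes the exact "total writes"/"user writes" wording of the ftl_debug.c dump above):

    import re

    # Counter lines as they appear in the ftl_dev_dump_stats output above.
    stats_lines = [
        "[FTL][ftl0] total writes: 960",
        "[FTL][ftl0] user writes: 0",
    ]

    counters = {}
    for line in stats_lines:
        m = re.search(r"\[FTL\]\[\w+\] (total|user) writes: (\d+)", line)
        if m:
            counters[m.group(1)] = int(m.group(2))

    # WAF = total media writes / user writes; zero user writes is reported
    # as inf, matching the "WAF: inf" line in the dump.
    waf = counters["total"] / counters["user"] if counters["user"] else float("inf")
    print(f"WAF: {waf}")  # -> WAF: inf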
00:40:56.783 [2024-11-20 14:01:04.285852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:56.783 [2024-11-20 14:01:04.285918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:56.783 [2024-11-20 14:01:04.285932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:56.783 [2024-11-20 14:01:04.285940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.783 [2024-11-20 14:01:04.387217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:56.783 [2024-11-20 14:01:04.387281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:56.783 [2024-11-20 14:01:04.387294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:56.783 [2024-11-20 14:01:04.387301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.783 [2024-11-20 14:01:04.387418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:56.783 [2024-11-20 14:01:04.387430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:56.784 [2024-11-20 14:01:04.387438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:56.784 [2024-11-20 14:01:04.387445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.784 [2024-11-20 14:01:04.387487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:56.784 [2024-11-20 14:01:04.387496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:56.784 [2024-11-20 14:01:04.387505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:56.784 [2024-11-20 14:01:04.387513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.784 [2024-11-20 14:01:04.387604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:56.784 [2024-11-20 14:01:04.387621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:56.784 [2024-11-20 14:01:04.387629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:56.784 [2024-11-20 14:01:04.387637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.784 [2024-11-20 14:01:04.387681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:56.784 [2024-11-20 14:01:04.387692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:40:56.784 [2024-11-20 14:01:04.387700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:56.784 [2024-11-20 14:01:04.387707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.784 [2024-11-20 14:01:04.387764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:56.784 [2024-11-20 14:01:04.387778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:56.784 [2024-11-20 14:01:04.387787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:56.784 [2024-11-20 14:01:04.387795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.784 [2024-11-20 14:01:04.387835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:40:56.784 [2024-11-20 14:01:04.387845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:56.784 [2024-11-20 14:01:04.387853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:40:56.784 [2024-11-20 14:01:04.387860] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:56.784 [2024-11-20 14:01:04.388013] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 534.064 ms, result 0 00:40:58.691 00:40:58.691 00:40:58.691 14:01:06 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:40:58.691 [2024-11-20 14:01:06.391398] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:40:58.691 [2024-11-20 14:01:06.391533] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80585 ] 00:40:58.951 [2024-11-20 14:01:06.566653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:59.215 [2024-11-20 14:01:06.681183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:59.477 [2024-11-20 14:01:07.034451] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:59.477 [2024-11-20 14:01:07.034521] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:40:59.477 [2024-11-20 14:01:07.189820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.477 [2024-11-20 14:01:07.189880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:40:59.477 [2024-11-20 14:01:07.189896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:40:59.477 [2024-11-20 14:01:07.189905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.477 [2024-11-20 14:01:07.189956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.477 [2024-11-20 14:01:07.189967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:40:59.477 [2024-11-20 14:01:07.189977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:40:59.477 [2024-11-20 14:01:07.189985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.477 [2024-11-20 14:01:07.190003] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:40:59.477 [2024-11-20 14:01:07.191037] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:40:59.477 [2024-11-20 14:01:07.191060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.477 [2024-11-20 14:01:07.191068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:40:59.477 [2024-11-20 14:01:07.191078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.063 ms 00:40:59.477 [2024-11-20 14:01:07.191086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.477 [2024-11-20 14:01:07.192543] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:40:59.739 [2024-11-20 14:01:07.211788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.739 [2024-11-20 14:01:07.211827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:40:59.739 [2024-11-20 14:01:07.211840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.283 ms 00:40:59.739 [2024-11-20 
14:01:07.211848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.739 [2024-11-20 14:01:07.211919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.739 [2024-11-20 14:01:07.211929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:40:59.739 [2024-11-20 14:01:07.211938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:40:59.739 [2024-11-20 14:01:07.211945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.739 [2024-11-20 14:01:07.218729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.739 [2024-11-20 14:01:07.218855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:40:59.739 [2024-11-20 14:01:07.218869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.728 ms 00:40:59.739 [2024-11-20 14:01:07.218881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.739 [2024-11-20 14:01:07.218960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.739 [2024-11-20 14:01:07.218974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:40:59.739 [2024-11-20 14:01:07.218983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:40:59.739 [2024-11-20 14:01:07.218991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.739 [2024-11-20 14:01:07.219037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.739 [2024-11-20 14:01:07.219054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:40:59.739 [2024-11-20 14:01:07.219063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:40:59.739 [2024-11-20 14:01:07.219070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.739 [2024-11-20 14:01:07.219101] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:40:59.739 [2024-11-20 14:01:07.223901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.739 [2024-11-20 14:01:07.223932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:40:59.739 [2024-11-20 14:01:07.223941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.822 ms 00:40:59.739 [2024-11-20 14:01:07.223951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.739 [2024-11-20 14:01:07.223980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.739 [2024-11-20 14:01:07.223988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:40:59.739 [2024-11-20 14:01:07.223996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:40:59.739 [2024-11-20 14:01:07.224003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.739 [2024-11-20 14:01:07.224047] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:40:59.739 [2024-11-20 14:01:07.224067] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:40:59.739 [2024-11-20 14:01:07.224101] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:40:59.739 [2024-11-20 14:01:07.224120] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:40:59.739 [2024-11-20 
14:01:07.224207] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:40:59.739 [2024-11-20 14:01:07.224217] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:40:59.739 [2024-11-20 14:01:07.224228] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:40:59.739 [2024-11-20 14:01:07.224239] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:40:59.739 [2024-11-20 14:01:07.224248] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:40:59.739 [2024-11-20 14:01:07.224257] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:40:59.739 [2024-11-20 14:01:07.224265] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:40:59.739 [2024-11-20 14:01:07.224272] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:40:59.739 [2024-11-20 14:01:07.224282] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:40:59.739 [2024-11-20 14:01:07.224290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.739 [2024-11-20 14:01:07.224298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:40:59.739 [2024-11-20 14:01:07.224306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:40:59.739 [2024-11-20 14:01:07.224313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.739 [2024-11-20 14:01:07.224380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.739 [2024-11-20 14:01:07.224389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:40:59.739 [2024-11-20 14:01:07.224397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:40:59.739 [2024-11-20 14:01:07.224404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.739 [2024-11-20 14:01:07.224497] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:40:59.739 [2024-11-20 14:01:07.224510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:40:59.739 [2024-11-20 14:01:07.224519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:59.739 [2024-11-20 14:01:07.224527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:59.739 [2024-11-20 14:01:07.224535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:40:59.739 [2024-11-20 14:01:07.224543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:40:59.739 [2024-11-20 14:01:07.224551] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:40:59.739 [2024-11-20 14:01:07.224559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:40:59.739 [2024-11-20 14:01:07.224566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:40:59.739 [2024-11-20 14:01:07.224573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:40:59.739 [2024-11-20 14:01:07.224580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:40:59.739 [2024-11-20 14:01:07.224586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:40:59.739 [2024-11-20 14:01:07.224593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 
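The layout summary above is internally consistent and easy to cross-check: 20971520 L2P entries at 4 bytes each is exactly the 80.00 MiB reported for the l2p region in the NV cache layout dump. A short sketch of the arithmetic (the 4 KiB logical block size in the last step is an assumption for illustration, not stated in this log):

    # Values reported by ftl_layout.c above.
    l2p_entries = 20971520   # "L2P entries: 20971520"
    l2p_addr_size = 4        # bytes per entry, "L2P address size: 4"

    l2p_mib = l2p_entries * l2p_addr_size / (1024 * 1024)
    print(l2p_mib)  # -> 80.0, matching "Region l2p ... blocks: 80.00 MiB"

    # Assuming 4 KiB logical blocks, the same entry count addresses
    # 80 GiB of user-visible capacity.
    print(l2p_entries * 4096 / 1024 ** 3)  # -> 80.0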
00:40:59.739 [2024-11-20 14:01:07.224600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:40:59.739 [2024-11-20 14:01:07.224606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:40:59.739 [2024-11-20 14:01:07.224622] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:59.739 [2024-11-20 14:01:07.224629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:40:59.739 [2024-11-20 14:01:07.224636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:40:59.739 [2024-11-20 14:01:07.224642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:59.739 [2024-11-20 14:01:07.224649] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:40:59.739 [2024-11-20 14:01:07.224655] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:40:59.739 [2024-11-20 14:01:07.224661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:59.739 [2024-11-20 14:01:07.224668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:40:59.739 [2024-11-20 14:01:07.224674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:40:59.739 [2024-11-20 14:01:07.224680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:59.739 [2024-11-20 14:01:07.224687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:40:59.739 [2024-11-20 14:01:07.224693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:40:59.739 [2024-11-20 14:01:07.224699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:59.739 [2024-11-20 14:01:07.224706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:40:59.739 [2024-11-20 14:01:07.224712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:40:59.739 [2024-11-20 14:01:07.224739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:40:59.739 [2024-11-20 14:01:07.224745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:40:59.739 [2024-11-20 14:01:07.224752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:40:59.739 [2024-11-20 14:01:07.224757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:59.739 [2024-11-20 14:01:07.224764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:40:59.739 [2024-11-20 14:01:07.224771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:40:59.739 [2024-11-20 14:01:07.224777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:40:59.739 [2024-11-20 14:01:07.224784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:40:59.739 [2024-11-20 14:01:07.224791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:40:59.739 [2024-11-20 14:01:07.224799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:59.739 [2024-11-20 14:01:07.224805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:40:59.739 [2024-11-20 14:01:07.224812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:40:59.739 [2024-11-20 14:01:07.224819] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:59.739 [2024-11-20 14:01:07.224825] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:40:59.740 [2024-11-20 14:01:07.224832] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:40:59.740 [2024-11-20 14:01:07.224839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:40:59.740 [2024-11-20 14:01:07.224845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:40:59.740 [2024-11-20 14:01:07.224852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:40:59.740 [2024-11-20 14:01:07.224859] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:40:59.740 [2024-11-20 14:01:07.224865] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:40:59.740 [2024-11-20 14:01:07.224871] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:40:59.740 [2024-11-20 14:01:07.224877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:40:59.740 [2024-11-20 14:01:07.224884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:40:59.740 [2024-11-20 14:01:07.224893] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:40:59.740 [2024-11-20 14:01:07.224903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:59.740 [2024-11-20 14:01:07.224924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:40:59.740 [2024-11-20 14:01:07.224932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:40:59.740 [2024-11-20 14:01:07.224940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:40:59.740 [2024-11-20 14:01:07.224947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:40:59.740 [2024-11-20 14:01:07.224954] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:40:59.740 [2024-11-20 14:01:07.224961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:40:59.740 [2024-11-20 14:01:07.224968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:40:59.740 [2024-11-20 14:01:07.224975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:40:59.740 [2024-11-20 14:01:07.224981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:40:59.740 [2024-11-20 14:01:07.224989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:40:59.740 [2024-11-20 14:01:07.224995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:40:59.740 [2024-11-20 14:01:07.225002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:40:59.740 [2024-11-20 14:01:07.225008] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:40:59.740 [2024-11-20 
14:01:07.225015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:40:59.740 [2024-11-20 14:01:07.225022] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:40:59.740 [2024-11-20 14:01:07.225033] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:40:59.740 [2024-11-20 14:01:07.225040] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:40:59.740 [2024-11-20 14:01:07.225047] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:40:59.740 [2024-11-20 14:01:07.225055] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:40:59.740 [2024-11-20 14:01:07.225062] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:40:59.740 [2024-11-20 14:01:07.225069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.740 [2024-11-20 14:01:07.225077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:40:59.740 [2024-11-20 14:01:07.225084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.627 ms 00:40:59.740 [2024-11-20 14:01:07.225091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.740 [2024-11-20 14:01:07.262350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.740 [2024-11-20 14:01:07.262490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:40:59.740 [2024-11-20 14:01:07.262508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.281 ms 00:40:59.740 [2024-11-20 14:01:07.262516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.740 [2024-11-20 14:01:07.262619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.740 [2024-11-20 14:01:07.262628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:40:59.740 [2024-11-20 14:01:07.262636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:40:59.740 [2024-11-20 14:01:07.262644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.740 [2024-11-20 14:01:07.317987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.740 [2024-11-20 14:01:07.318116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:40:59.740 [2024-11-20 14:01:07.318133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.356 ms 00:40:59.740 [2024-11-20 14:01:07.318142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.740 [2024-11-20 14:01:07.318205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.740 [2024-11-20 14:01:07.318213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:40:59.740 [2024-11-20 14:01:07.318226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:40:59.740 [2024-11-20 14:01:07.318233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.740 [2024-11-20 14:01:07.318712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:40:59.740 [2024-11-20 14:01:07.318742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:40:59.740 [2024-11-20 14:01:07.318751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:40:59.740 [2024-11-20 14:01:07.318759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.740 [2024-11-20 14:01:07.318872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.740 [2024-11-20 14:01:07.318885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:40:59.740 [2024-11-20 14:01:07.318893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:40:59.740 [2024-11-20 14:01:07.318904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.740 [2024-11-20 14:01:07.337542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.740 [2024-11-20 14:01:07.337679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:40:59.740 [2024-11-20 14:01:07.337700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.651 ms 00:40:59.740 [2024-11-20 14:01:07.337709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.740 [2024-11-20 14:01:07.357205] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:40:59.740 [2024-11-20 14:01:07.357244] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:40:59.740 [2024-11-20 14:01:07.357259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.740 [2024-11-20 14:01:07.357268] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:40:59.740 [2024-11-20 14:01:07.357278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.448 ms 00:40:59.740 [2024-11-20 14:01:07.357285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.740 [2024-11-20 14:01:07.386669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.740 [2024-11-20 14:01:07.386729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:40:59.740 [2024-11-20 14:01:07.386743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.396 ms 00:40:59.740 [2024-11-20 14:01:07.386751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.740 [2024-11-20 14:01:07.405248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.740 [2024-11-20 14:01:07.405290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:40:59.740 [2024-11-20 14:01:07.405301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.473 ms 00:40:59.740 [2024-11-20 14:01:07.405309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.740 [2024-11-20 14:01:07.423143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.740 [2024-11-20 14:01:07.423181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:40:59.740 [2024-11-20 14:01:07.423192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.829 ms 00:40:59.740 [2024-11-20 14:01:07.423199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:40:59.740 [2024-11-20 14:01:07.424012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:40:59.740 [2024-11-20 14:01:07.424046] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:40:59.740 [2024-11-20 14:01:07.424055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.704 ms 00:40:59.740 [2024-11-20 14:01:07.424066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.000 [2024-11-20 14:01:07.511255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.000 [2024-11-20 14:01:07.511323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:41:00.000 [2024-11-20 14:01:07.511343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.335 ms 00:41:00.000 [2024-11-20 14:01:07.511351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.000 [2024-11-20 14:01:07.522564] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:41:00.000 [2024-11-20 14:01:07.525689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.000 [2024-11-20 14:01:07.525735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:41:00.000 [2024-11-20 14:01:07.525748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.287 ms 00:41:00.000 [2024-11-20 14:01:07.525756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.000 [2024-11-20 14:01:07.525859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.000 [2024-11-20 14:01:07.525871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:41:00.000 [2024-11-20 14:01:07.525879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:41:00.000 [2024-11-20 14:01:07.525890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.000 [2024-11-20 14:01:07.525979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.000 [2024-11-20 14:01:07.525990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:41:00.000 [2024-11-20 14:01:07.525999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:41:00.000 [2024-11-20 14:01:07.526006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.000 [2024-11-20 14:01:07.526024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.000 [2024-11-20 14:01:07.526033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:41:00.000 [2024-11-20 14:01:07.526041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:41:00.000 [2024-11-20 14:01:07.526048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.000 [2024-11-20 14:01:07.526084] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:41:00.000 [2024-11-20 14:01:07.526102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.000 [2024-11-20 14:01:07.526110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:41:00.000 [2024-11-20 14:01:07.526118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:41:00.000 [2024-11-20 14:01:07.526126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.000 [2024-11-20 14:01:07.562762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.000 [2024-11-20 14:01:07.562805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:41:00.000 [2024-11-20 14:01:07.562818] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.686 ms 00:41:00.000 [2024-11-20 14:01:07.562831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.000 [2024-11-20 14:01:07.562908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:00.000 [2024-11-20 14:01:07.562917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:41:00.000 [2024-11-20 14:01:07.562926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:41:00.000 [2024-11-20 14:01:07.562933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:00.000 [2024-11-20 14:01:07.564143] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 374.496 ms, result 0 00:41:01.376  [2024-11-20T14:01:10.031Z] Copying: 32/1024 [MB] (32 MBps) [2024-11-20T14:01:10.975Z] Copying: 65/1024 [MB] (33 MBps) [2024-11-20T14:01:11.927Z] Copying: 100/1024 [MB] (34 MBps) [2024-11-20T14:01:12.864Z] Copying: 136/1024 [MB] (36 MBps) [2024-11-20T14:01:13.801Z] Copying: 176/1024 [MB] (39 MBps) [2024-11-20T14:01:14.737Z] Copying: 210/1024 [MB] (34 MBps) [2024-11-20T14:01:16.113Z] Copying: 246/1024 [MB] (35 MBps) [2024-11-20T14:01:17.049Z] Copying: 279/1024 [MB] (33 MBps) [2024-11-20T14:01:17.985Z] Copying: 314/1024 [MB] (34 MBps) [2024-11-20T14:01:18.924Z] Copying: 352/1024 [MB] (38 MBps) [2024-11-20T14:01:19.861Z] Copying: 385/1024 [MB] (33 MBps) [2024-11-20T14:01:20.800Z] Copying: 418/1024 [MB] (33 MBps) [2024-11-20T14:01:21.744Z] Copying: 449/1024 [MB] (30 MBps) [2024-11-20T14:01:23.138Z] Copying: 484/1024 [MB] (35 MBps) [2024-11-20T14:01:23.825Z] Copying: 521/1024 [MB] (36 MBps) [2024-11-20T14:01:24.763Z] Copying: 555/1024 [MB] (34 MBps) [2024-11-20T14:01:25.698Z] Copying: 589/1024 [MB] (33 MBps) [2024-11-20T14:01:27.076Z] Copying: 623/1024 [MB] (33 MBps) [2024-11-20T14:01:28.013Z] Copying: 656/1024 [MB] (33 MBps) [2024-11-20T14:01:28.948Z] Copying: 690/1024 [MB] (33 MBps) [2024-11-20T14:01:29.886Z] Copying: 725/1024 [MB] (34 MBps) [2024-11-20T14:01:30.823Z] Copying: 756/1024 [MB] (31 MBps) [2024-11-20T14:01:31.761Z] Copying: 790/1024 [MB] (34 MBps) [2024-11-20T14:01:32.700Z] Copying: 829/1024 [MB] (38 MBps) [2024-11-20T14:01:34.119Z] Copying: 864/1024 [MB] (34 MBps) [2024-11-20T14:01:34.685Z] Copying: 898/1024 [MB] (33 MBps) [2024-11-20T14:01:36.062Z] Copying: 932/1024 [MB] (34 MBps) [2024-11-20T14:01:36.999Z] Copying: 965/1024 [MB] (33 MBps) [2024-11-20T14:01:37.567Z] Copying: 999/1024 [MB] (33 MBps) [2024-11-20T14:01:38.942Z] Copying: 1024/1024 [MB] (average 34 MBps)[2024-11-20 14:01:38.876209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.223 [2024-11-20 14:01:38.876560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:41:31.223 [2024-11-20 14:01:38.876588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:41:31.223 [2024-11-20 14:01:38.876600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.223 [2024-11-20 14:01:38.876632] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:41:31.223 [2024-11-20 14:01:38.882890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.223 [2024-11-20 14:01:38.882996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:41:31.223 [2024-11-20 14:01:38.883025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.247 ms 
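The spdk_dd progress trail above ends at "Copying: 1024/1024 [MB] (average 34 MBps)", and that average can be recovered from the timestamped samples themselves. A rough sketch (illustrative; it assumes the bracketed ISO-8601 timestamps and "Copying: N/1024 [MB]" wording printed above):

    import re
    from datetime import datetime

    # Three of the progress samples from the log above.
    progress = """\
    [2024-11-20T14:01:10.031Z] Copying: 32/1024 [MB] (32 MBps)
    [2024-11-20T14:01:37.567Z] Copying: 999/1024 [MB] (33 MBps)
    [2024-11-20T14:01:38.942Z] Copying: 1024/1024 [MB] (average 34 MBps)"""

    samples = [
        (datetime.fromisoformat(ts), int(mb))
        for ts, mb in re.findall(r"\[(.+?)Z\] Copying: (\d+)/1024", progress)
    ]
    (t0, mb0), (t1, mb1) = samples[0], samples[-1]
    rate = (mb1 - mb0) / (t1 - t0).total_seconds()
    print(f"{rate:.0f} MBps")  # ~34 MBps, consistent with the reported average

The per-sample rates in the trail hover between 30 and 39 MBps, so the flat average is a fair summary of the restore bandwidth.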
00:41:31.223 [2024-11-20 14:01:38.883036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.223 [2024-11-20 14:01:38.883493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.223 [2024-11-20 14:01:38.883513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:41:31.223 [2024-11-20 14:01:38.883525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:41:31.223 [2024-11-20 14:01:38.883534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.223 [2024-11-20 14:01:38.887254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.223 [2024-11-20 14:01:38.887276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:41:31.223 [2024-11-20 14:01:38.887286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.709 ms 00:41:31.223 [2024-11-20 14:01:38.887294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.223 [2024-11-20 14:01:38.892468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.223 [2024-11-20 14:01:38.892502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:41:31.223 [2024-11-20 14:01:38.892512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.142 ms 00:41:31.223 [2024-11-20 14:01:38.892519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.223 [2024-11-20 14:01:38.931703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.223 [2024-11-20 14:01:38.931761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:41:31.223 [2024-11-20 14:01:38.931775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.135 ms 00:41:31.223 [2024-11-20 14:01:38.931783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.482 [2024-11-20 14:01:38.954130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.482 [2024-11-20 14:01:38.954253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:41:31.482 [2024-11-20 14:01:38.954272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.338 ms 00:41:31.482 [2024-11-20 14:01:38.954281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.482 [2024-11-20 14:01:38.954418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.482 [2024-11-20 14:01:38.954437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:41:31.482 [2024-11-20 14:01:38.954447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:41:31.482 [2024-11-20 14:01:38.954454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.482 [2024-11-20 14:01:38.991951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.482 [2024-11-20 14:01:38.992001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:41:31.482 [2024-11-20 14:01:38.992014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.553 ms 00:41:31.482 [2024-11-20 14:01:38.992023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.482 [2024-11-20 14:01:39.028938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.482 [2024-11-20 14:01:39.029012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:41:31.482 [2024-11-20 14:01:39.029026] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.943 ms 00:41:31.482 [2024-11-20 14:01:39.029049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.482 [2024-11-20 14:01:39.066426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.482 [2024-11-20 14:01:39.066565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:41:31.482 [2024-11-20 14:01:39.066582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.406 ms 00:41:31.482 [2024-11-20 14:01:39.066590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.482 [2024-11-20 14:01:39.103256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:31.482 [2024-11-20 14:01:39.103314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:41:31.482 [2024-11-20 14:01:39.103328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.635 ms 00:41:31.482 [2024-11-20 14:01:39.103336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.482 [2024-11-20 14:01:39.103385] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:41:31.482 [2024-11-20 14:01:39.103402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:41:31.482 [2024-11-20 14:01:39.103752] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42-100 (59 bands): 0 / 261120 wr_cnt: 0 state: free
00:41:31.483 [2024-11-20 14:01:39.104219] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:41:31.483 [2024-11-20 14:01:39.104231] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9fa4affe-81c0-4583-92ea-789529719ac0
00:41:31.483 [2024-11-20 14:01:39.104239] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:41:31.483 [2024-11-20 14:01:39.104246] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:41:31.483 [2024-11-20 14:01:39.104254] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:41:31.483 [2024-11-20 14:01:39.104263] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:41:31.483 [2024-11-20 14:01:39.104270] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:41:31.483 [2024-11-20 14:01:39.104278] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:41:31.483 [2024-11-20 14:01:39.104315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:41:31.483 [2024-11-20 14:01:39.104324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:41:31.483 [2024-11-20 14:01:39.104331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:41:31.483 [2024-11-20 14:01:39.104339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:41:31.483 [2024-11-20 14:01:39.104356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:41:31.483 [2024-11-20 14:01:39.104366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms
00:41:31.483 [2024-11-20 14:01:39.104374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:41:31.483 [2024-11-20 14:01:39.125804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:41:31.483 [2024-11-20 14:01:39.125857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:41:31.483 [2024-11-20 14:01:39.125870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.423 ms
00:41:31.483 [2024-11-20 14:01:39.125895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:41:31.483 [2024-11-20 14:01:39.126531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:41:31.483 [2024-11-20 14:01:39.126545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:41:31.483 [2024-11-20 14:01:39.126553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:41:31.483 [2024-11-20 14:01:39.126566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.483 [2024-11-20 14:01:39.180816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.483 [2024-11-20 14:01:39.181009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:31.483 [2024-11-20 14:01:39.181027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.483 [2024-11-20 14:01:39.181037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.483 [2024-11-20 14:01:39.181119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.483 [2024-11-20 14:01:39.181127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:31.483 [2024-11-20 14:01:39.181135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.483 [2024-11-20 14:01:39.181149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.483 [2024-11-20 14:01:39.181230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.483 [2024-11-20 14:01:39.181243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:31.483 [2024-11-20 14:01:39.181251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.483 [2024-11-20 14:01:39.181258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.483 [2024-11-20 14:01:39.181275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.483 [2024-11-20 14:01:39.181283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:31.483 [2024-11-20 14:01:39.181290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.483 [2024-11-20 14:01:39.181298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.743 [2024-11-20 14:01:39.313816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.743 [2024-11-20 14:01:39.313900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:31.743 [2024-11-20 14:01:39.313915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.743 [2024-11-20 14:01:39.313924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.743 [2024-11-20 14:01:39.416318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.743 [2024-11-20 14:01:39.416391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:31.743 [2024-11-20 14:01:39.416405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.743 [2024-11-20 14:01:39.416434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.743 [2024-11-20 14:01:39.416538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.743 [2024-11-20 14:01:39.416548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:31.743 [2024-11-20 14:01:39.416556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.743 [2024-11-20 14:01:39.416564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.743 
[2024-11-20 14:01:39.416603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.743 [2024-11-20 14:01:39.416612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:31.743 [2024-11-20 14:01:39.416619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.743 [2024-11-20 14:01:39.416626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.743 [2024-11-20 14:01:39.416867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.743 [2024-11-20 14:01:39.416915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:31.743 [2024-11-20 14:01:39.416936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.743 [2024-11-20 14:01:39.416955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.743 [2024-11-20 14:01:39.417013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.743 [2024-11-20 14:01:39.417037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:41:31.743 [2024-11-20 14:01:39.417086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.743 [2024-11-20 14:01:39.417151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.743 [2024-11-20 14:01:39.417215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.743 [2024-11-20 14:01:39.417246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:31.743 [2024-11-20 14:01:39.417274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.743 [2024-11-20 14:01:39.417299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.743 [2024-11-20 14:01:39.417369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:41:31.743 [2024-11-20 14:01:39.417399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:31.743 [2024-11-20 14:01:39.417429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:41:31.743 [2024-11-20 14:01:39.417455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:31.743 [2024-11-20 14:01:39.417639] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 542.445 ms, result 0 00:41:32.701 00:41:32.701 00:41:32.960 14:01:40 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:41:34.863 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:41:34.863 14:01:42 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:41:34.863 [2024-11-20 14:01:42.240826] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
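
The two restore.sh steps above are the heart of the test: the md5sum -c call verifies data read back from the FTL device against a recorded checksum, and the spdk_dd call that follows writes the test file into the ftl0 bdev at a block offset of 131072. A minimal sketch of that write-then-verify pattern, run from an SPDK checkout; the paths and flags are the ones visible in the log, but the standalone invocation itself is illustrative, not a copy of restore.sh:

  # write the test payload into the ftl0 bdev, 131072 blocks into the device
  build/bin/spdk_dd --if=test/ftl/testfile --ob=ftl0 \
      --json=test/ftl/config/ftl.json --seek=131072
  # after reading the region back out (read-back step not shown), verify it
  md5sum -c test/ftl/testfile.md5
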
00:41:34.863 [2024-11-20 14:01:42.240952] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80950 ] 00:41:34.863 [2024-11-20 14:01:42.414243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:34.863 [2024-11-20 14:01:42.529763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:35.433 [2024-11-20 14:01:42.896452] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:35.433 [2024-11-20 14:01:42.896611] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:41:35.433 [2024-11-20 14:01:43.053081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.433 [2024-11-20 14:01:43.053226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:41:35.433 [2024-11-20 14:01:43.053247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:41:35.433 [2024-11-20 14:01:43.053256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.433 [2024-11-20 14:01:43.053308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.433 [2024-11-20 14:01:43.053318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:41:35.433 [2024-11-20 14:01:43.053328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:41:35.433 [2024-11-20 14:01:43.053335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.433 [2024-11-20 14:01:43.053354] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:41:35.433 [2024-11-20 14:01:43.054279] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:41:35.433 [2024-11-20 14:01:43.054301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.433 [2024-11-20 14:01:43.054309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:41:35.433 [2024-11-20 14:01:43.054317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:41:35.433 [2024-11-20 14:01:43.054324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.433 [2024-11-20 14:01:43.055732] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:41:35.433 [2024-11-20 14:01:43.074330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.433 [2024-11-20 14:01:43.074373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:41:35.433 [2024-11-20 14:01:43.074386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.653 ms 00:41:35.433 [2024-11-20 14:01:43.074394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.433 [2024-11-20 14:01:43.074473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.433 [2024-11-20 14:01:43.074483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:41:35.433 [2024-11-20 14:01:43.074492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:41:35.433 [2024-11-20 14:01:43.074499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.433 [2024-11-20 14:01:43.081516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
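
Every FTL management step in this trace is emitted by trace_step in mngt/ftl_mngt.c as a fixed quadruple: an Action (or Rollback) marker at line 427, the step name at 428, its duration at 430, and its status at 431. That regularity makes a run easy to summarize after the fact; a small sketch, assuming the console output has been saved to a hypothetical console.log with one record per line:

  # pair each management step name with its duration
  grep -E '428:trace_step|430:trace_step' console.log | paste - -
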
00:41:35.433 [2024-11-20 14:01:43.081552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:41:35.433 [2024-11-20 14:01:43.081562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.956 ms 00:41:35.433 [2024-11-20 14:01:43.081573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.433 [2024-11-20 14:01:43.081668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.433 [2024-11-20 14:01:43.081683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:41:35.433 [2024-11-20 14:01:43.081693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:41:35.433 [2024-11-20 14:01:43.081701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.433 [2024-11-20 14:01:43.081764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.433 [2024-11-20 14:01:43.081775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:41:35.433 [2024-11-20 14:01:43.081784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:41:35.433 [2024-11-20 14:01:43.081792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.433 [2024-11-20 14:01:43.081822] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:41:35.433 [2024-11-20 14:01:43.086490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.433 [2024-11-20 14:01:43.086521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:41:35.433 [2024-11-20 14:01:43.086531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.689 ms 00:41:35.433 [2024-11-20 14:01:43.086541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.433 [2024-11-20 14:01:43.086570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.433 [2024-11-20 14:01:43.086579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:41:35.433 [2024-11-20 14:01:43.086587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:41:35.433 [2024-11-20 14:01:43.086594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.433 [2024-11-20 14:01:43.086642] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:41:35.433 [2024-11-20 14:01:43.086663] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:41:35.433 [2024-11-20 14:01:43.086696] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:41:35.433 [2024-11-20 14:01:43.086730] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:41:35.433 [2024-11-20 14:01:43.086816] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:41:35.433 [2024-11-20 14:01:43.086827] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:41:35.433 [2024-11-20 14:01:43.086837] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:41:35.433 [2024-11-20 14:01:43.086848] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:41:35.433 [2024-11-20 14:01:43.086856] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:41:35.433 [2024-11-20 14:01:43.086864] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:41:35.433 [2024-11-20 14:01:43.086872] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:41:35.433 [2024-11-20 14:01:43.086881] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:41:35.433 [2024-11-20 14:01:43.086891] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:41:35.433 [2024-11-20 14:01:43.086899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.433 [2024-11-20 14:01:43.086907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:41:35.433 [2024-11-20 14:01:43.086915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:41:35.433 [2024-11-20 14:01:43.086924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.433 [2024-11-20 14:01:43.086991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.433 [2024-11-20 14:01:43.086999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:41:35.433 [2024-11-20 14:01:43.087007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:41:35.433 [2024-11-20 14:01:43.087014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.433 [2024-11-20 14:01:43.087106] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:41:35.433 [2024-11-20 14:01:43.087122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:41:35.433 [2024-11-20 14:01:43.087131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:35.433 [2024-11-20 14:01:43.087142] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:35.433 [2024-11-20 14:01:43.087151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:41:35.433 [2024-11-20 14:01:43.087159] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:41:35.433 [2024-11-20 14:01:43.087166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:41:35.433 [2024-11-20 14:01:43.087173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:41:35.433 [2024-11-20 14:01:43.087181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:41:35.433 [2024-11-20 14:01:43.087188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:35.433 [2024-11-20 14:01:43.087194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:41:35.433 [2024-11-20 14:01:43.087201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:41:35.433 [2024-11-20 14:01:43.087208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:41:35.433 [2024-11-20 14:01:43.087215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:41:35.433 [2024-11-20 14:01:43.087221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:41:35.433 [2024-11-20 14:01:43.087236] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:35.433 [2024-11-20 14:01:43.087243] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:41:35.433 [2024-11-20 14:01:43.087249] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:41:35.433 [2024-11-20 14:01:43.087256] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:35.433 [2024-11-20 14:01:43.087262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:41:35.433 [2024-11-20 14:01:43.087268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:41:35.433 [2024-11-20 14:01:43.087274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:35.433 [2024-11-20 14:01:43.087281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:41:35.434 [2024-11-20 14:01:43.087288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:41:35.434 [2024-11-20 14:01:43.087294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:35.434 [2024-11-20 14:01:43.087300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:41:35.434 [2024-11-20 14:01:43.087306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:41:35.434 [2024-11-20 14:01:43.087313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:35.434 [2024-11-20 14:01:43.087319] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:41:35.434 [2024-11-20 14:01:43.087325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:41:35.434 [2024-11-20 14:01:43.087332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:41:35.434 [2024-11-20 14:01:43.087338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:41:35.434 [2024-11-20 14:01:43.087344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:41:35.434 [2024-11-20 14:01:43.087350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:35.434 [2024-11-20 14:01:43.087357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:41:35.434 [2024-11-20 14:01:43.087365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:41:35.434 [2024-11-20 14:01:43.087371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:41:35.434 [2024-11-20 14:01:43.087377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:41:35.434 [2024-11-20 14:01:43.087383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:41:35.434 [2024-11-20 14:01:43.087389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:35.434 [2024-11-20 14:01:43.087396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:41:35.434 [2024-11-20 14:01:43.087402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:41:35.434 [2024-11-20 14:01:43.087409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:35.434 [2024-11-20 14:01:43.087415] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:41:35.434 [2024-11-20 14:01:43.087423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:41:35.434 [2024-11-20 14:01:43.087430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:41:35.434 [2024-11-20 14:01:43.087437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:41:35.434 [2024-11-20 14:01:43.087446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:41:35.434 [2024-11-20 14:01:43.087453] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:41:35.434 [2024-11-20 14:01:43.087459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:41:35.434 
[2024-11-20 14:01:43.087465] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:41:35.434 [2024-11-20 14:01:43.087471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:41:35.434 [2024-11-20 14:01:43.087477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:41:35.434 [2024-11-20 14:01:43.087484] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:41:35.434 [2024-11-20 14:01:43.087494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:35.434 [2024-11-20 14:01:43.087503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:41:35.434 [2024-11-20 14:01:43.087510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:41:35.434 [2024-11-20 14:01:43.087518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:41:35.434 [2024-11-20 14:01:43.087525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:41:35.434 [2024-11-20 14:01:43.087533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:41:35.434 [2024-11-20 14:01:43.087556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:41:35.434 [2024-11-20 14:01:43.087564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:41:35.434 [2024-11-20 14:01:43.087571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:41:35.434 [2024-11-20 14:01:43.087578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:41:35.434 [2024-11-20 14:01:43.087586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:41:35.434 [2024-11-20 14:01:43.087593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:41:35.434 [2024-11-20 14:01:43.087599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:41:35.434 [2024-11-20 14:01:43.087607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:41:35.434 [2024-11-20 14:01:43.087614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:41:35.434 [2024-11-20 14:01:43.087620] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:41:35.434 [2024-11-20 14:01:43.087632] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:41:35.434 [2024-11-20 14:01:43.087639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:41:35.434 [2024-11-20 14:01:43.087646] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:41:35.434 [2024-11-20 14:01:43.087653] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:41:35.434 [2024-11-20 14:01:43.087660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:41:35.434 [2024-11-20 14:01:43.087675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.434 [2024-11-20 14:01:43.087683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:41:35.434 [2024-11-20 14:01:43.087690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:41:35.434 [2024-11-20 14:01:43.087697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.434 [2024-11-20 14:01:43.125736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.434 [2024-11-20 14:01:43.125792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:41:35.434 [2024-11-20 14:01:43.125806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.038 ms 00:41:35.434 [2024-11-20 14:01:43.125814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.434 [2024-11-20 14:01:43.125913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.434 [2024-11-20 14:01:43.125923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:41:35.434 [2024-11-20 14:01:43.125931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:41:35.434 [2024-11-20 14:01:43.125939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.693 [2024-11-20 14:01:43.181873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.694 [2024-11-20 14:01:43.181924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:41:35.694 [2024-11-20 14:01:43.181937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.967 ms 00:41:35.694 [2024-11-20 14:01:43.181945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.694 [2024-11-20 14:01:43.182003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.694 [2024-11-20 14:01:43.182012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:41:35.694 [2024-11-20 14:01:43.182023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:41:35.694 [2024-11-20 14:01:43.182031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.694 [2024-11-20 14:01:43.182513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.694 [2024-11-20 14:01:43.182525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:41:35.694 [2024-11-20 14:01:43.182534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.408 ms 00:41:35.694 [2024-11-20 14:01:43.182541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.694 [2024-11-20 14:01:43.182649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.694 [2024-11-20 14:01:43.182662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:41:35.694 [2024-11-20 14:01:43.182670] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:41:35.694 [2024-11-20 14:01:43.182681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.694 [2024-11-20 14:01:43.200982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.694 [2024-11-20 14:01:43.201123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:41:35.694 [2024-11-20 14:01:43.201143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.317 ms 00:41:35.694 [2024-11-20 14:01:43.201151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.694 [2024-11-20 14:01:43.219375] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:41:35.694 [2024-11-20 14:01:43.219432] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:41:35.694 [2024-11-20 14:01:43.219446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.694 [2024-11-20 14:01:43.219455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:41:35.694 [2024-11-20 14:01:43.219464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.206 ms 00:41:35.694 [2024-11-20 14:01:43.219471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.694 [2024-11-20 14:01:43.249715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.694 [2024-11-20 14:01:43.249773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:41:35.694 [2024-11-20 14:01:43.249787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.256 ms 00:41:35.694 [2024-11-20 14:01:43.249796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.694 [2024-11-20 14:01:43.268936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.694 [2024-11-20 14:01:43.268989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:41:35.694 [2024-11-20 14:01:43.269002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.114 ms 00:41:35.694 [2024-11-20 14:01:43.269010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.694 [2024-11-20 14:01:43.287764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.694 [2024-11-20 14:01:43.287816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:41:35.694 [2024-11-20 14:01:43.287828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.742 ms 00:41:35.694 [2024-11-20 14:01:43.287837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.694 [2024-11-20 14:01:43.288566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.694 [2024-11-20 14:01:43.288593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:41:35.694 [2024-11-20 14:01:43.288604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.595 ms 00:41:35.694 [2024-11-20 14:01:43.288615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.694 [2024-11-20 14:01:43.375191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.694 [2024-11-20 14:01:43.375261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:41:35.694 [2024-11-20 14:01:43.375282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 86.721 ms 00:41:35.694 [2024-11-20 14:01:43.375291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.694 [2024-11-20 14:01:43.387086] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:41:35.694 [2024-11-20 14:01:43.390348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.694 [2024-11-20 14:01:43.390488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:41:35.694 [2024-11-20 14:01:43.390506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.003 ms 00:41:35.694 [2024-11-20 14:01:43.390516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.694 [2024-11-20 14:01:43.390637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.694 [2024-11-20 14:01:43.390650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:41:35.694 [2024-11-20 14:01:43.390659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:41:35.694 [2024-11-20 14:01:43.390671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.694 [2024-11-20 14:01:43.390830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.694 [2024-11-20 14:01:43.390851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:41:35.694 [2024-11-20 14:01:43.390861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:41:35.694 [2024-11-20 14:01:43.390870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.694 [2024-11-20 14:01:43.390899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.694 [2024-11-20 14:01:43.390909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:41:35.694 [2024-11-20 14:01:43.390917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:41:35.694 [2024-11-20 14:01:43.390924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.694 [2024-11-20 14:01:43.390976] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:41:35.694 [2024-11-20 14:01:43.390987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.694 [2024-11-20 14:01:43.390994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:41:35.694 [2024-11-20 14:01:43.391002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:41:35.694 [2024-11-20 14:01:43.391009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.952 [2024-11-20 14:01:43.428072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.952 [2024-11-20 14:01:43.428138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:41:35.952 [2024-11-20 14:01:43.428153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.116 ms 00:41:35.952 [2024-11-20 14:01:43.428168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:41:35.952 [2024-11-20 14:01:43.428252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:41:35.952 [2024-11-20 14:01:43.428264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:41:35.952 [2024-11-20 14:01:43.428273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:41:35.952 [2024-11-20 14:01:43.428281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
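
In the layout dumps above, the ftl_layout.c region table and the superblock (ftl_sb_v5) table describe the same regions in two units: MiB versus FTL blocks. The figures are internally consistent at a 4 KiB block size (the sb region's 0x20 blocks come out to the 0.12 MiB shown), and the L2P region follows directly from the reported 20971520 entries at a 4-byte address size. A check of that arithmetic, in plain shell with values copied from the log:

  # L2P table: 20971520 entries x 4 B per address = 80 MiB, the l2p region size
  echo $(( 20971520 * 4 / 1048576 )) MiB
  # same region from the superblock view: 0x5000 blocks x 4 KiB per block
  echo $(( 0x5000 * 4096 / 1048576 )) MiB
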
00:41:35.952 [2024-11-20 14:01:43.429435] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.616 ms, result 0
00:41:36.888  [2024-11-20T14:01:45.543Z] Copying: 29/1024 [MB] (29 MBps) ... (34 further samples, 26-32 MBps) ... [2024-11-20T14:02:20.043Z] Copying: 1023/1024 [MB] (19 MBps) [2024-11-20T14:02:20.043Z] Copying: 1024/1024 [MB] (average 28 MBps)
[2024-11-20 14:02:19.897776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:12.324 [2024-11-20 14:02:19.897909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:42:12.324 [2024-11-20 14:02:19.897949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:42:12.324 [2024-11-20 14:02:19.897971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:12.324 [2024-11-20 14:02:19.900381] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:42:12.324 [2024-11-20 14:02:19.907082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:12.324 [2024-11-20 14:02:19.907120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:42:12.324 [2024-11-20 14:02:19.907135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.660 ms
00:42:12.324 [2024-11-20 14:02:19.907161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:12.324 [2024-11-20 14:02:19.917271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:12.324 [2024-11-20 14:02:19.917377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:42:12.324 [2024-11-20 14:02:19.917395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.610 ms
00:42:12.324 [2024-11-20 14:02:19.917414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:12.324 [2024-11-20 14:02:19.941959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:12.324 [2024-11-20 14:02:19.942003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:42:12.324 [2024-11-20 14:02:19.942017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.571 ms
00:42:12.324 [2024-11-20 14:02:19.942028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:12.324 [2024-11-20 14:02:19.947356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:12.324 [2024-11-20 14:02:19.947392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:42:12.324 [2024-11-20 14:02:19.947403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.303 ms
00:42:12.324 [2024-11-20 14:02:19.947429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:12.324 [2024-11-20 14:02:19.987401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:12.324 [2024-11-20 14:02:19.987449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:42:12.324 [2024-11-20 14:02:19.987464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.982 ms
00:42:12.324 [2024-11-20 14:02:19.987473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:12.324 [2024-11-20 14:02:20.009337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:12.324 [2024-11-20 14:02:20.009435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:42:12.324 [2024-11-20 14:02:20.009454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.861 ms
00:42:12.325 [2024-11-20 14:02:20.009481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:12.585 [2024-11-20 14:02:20.120859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:12.585 [2024-11-20 14:02:20.120927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:42:12.585 [2024-11-20 14:02:20.120954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 111.562 ms
00:42:12.585 [2024-11-20 14:02:20.120981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:12.585 [2024-11-20 14:02:20.157495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:12.585 [2024-11-20 14:02:20.157540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:42:12.585 [2024-11-20 14:02:20.157554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.563 ms
00:42:12.585 [2024-11-20 14:02:20.157580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:12.585 [2024-11-20 14:02:20.192094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:12.585 [2024-11-20 14:02:20.192160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:42:12.586 [2024-11-20 14:02:20.192173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.539 ms
00:42:12.586 [2024-11-20 14:02:20.192199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
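
The statistics dump below is the payoff of the run: 112128 user writes against 113088 total media writes, so the write amplification factor is simply total over user. The 960-write difference is FTL metadata, matching the "total writes: 960, user writes: 0, WAF: inf" dump from the first mount earlier in this log (WAF is infinite there because the user-write denominator is zero). A worked check of the reported value, not harness code:

  # WAF = total media writes / user writes, figures from the dump below
  awk 'BEGIN { printf "WAF: %.4f\n", 113088 / 112128 }'   # prints WAF: 1.0086
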
00:42:12.586 [2024-11-20 14:02:20.227733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:12.586 [2024-11-20 14:02:20.227779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:42:12.586 [2024-11-20 14:02:20.227793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.561 ms
00:42:12.586 [2024-11-20 14:02:20.227802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:12.586 [2024-11-20 14:02:20.263370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:12.586 [2024-11-20 14:02:20.263413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:42:12.586 [2024-11-20 14:02:20.263426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.553 ms
00:42:12.586 [2024-11-20 14:02:20.263435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:12.586 [2024-11-20 14:02:20.263475] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:42:12.586 [2024-11-20 14:02:20.263492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 112128 / 261120 wr_cnt: 1 state: open
00:42:12.586 [2024-11-20 14:02:20.263504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2-100 (99 bands): 0 / 261120 wr_cnt: 0 state: free
00:42:12.587 [2024-11-20 14:02:20.264511] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:42:12.587 [2024-11-20 14:02:20.264520] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9fa4affe-81c0-4583-92ea-789529719ac0
00:42:12.587 [2024-11-20 14:02:20.264531] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 112128
00:42:12.587 [2024-11-20 14:02:20.264540] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 113088
00:42:12.587 [2024-11-20 14:02:20.264549] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 112128
00:42:12.587 [2024-11-20 14:02:20.264559] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0086
00:42:12.587 [2024-11-20 14:02:20.264570] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:42:12.587 [2024-11-20 14:02:20.264586] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:42:12.587 [2024-11-20 14:02:20.264610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:42:12.587 [2024-11-20 14:02:20.264620] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:42:12.587 [2024-11-20 14:02:20.264628] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:42:12.587 [2024-11-20 14:02:20.264639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:12.587 [2024-11-20 14:02:20.264648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:42:12.587 [2024-11-20 14:02:20.264658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.167 ms
00:42:12.587 [2024-11-20 14:02:20.264668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:12.587 [2024-11-20 14:02:20.285439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:12.587 [2024-11-20 14:02:20.285476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:42:12.587 [2024-11-20 14:02:20.285488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.775 ms
00:42:12.587 [2024-11-20 14:02:20.285505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:42:12.587 [2024-11-20 14:02:20.286132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:42:12.587 [2024-11-20 14:02:20.286154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:42:12.587 [2024-11-20 14:02:20.286165]
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:42:12.587 [2024-11-20 14:02:20.286175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:12.846 [2024-11-20 14:02:20.339962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:12.846 [2024-11-20 14:02:20.340020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:12.846 [2024-11-20 14:02:20.340033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:12.846 [2024-11-20 14:02:20.340043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:12.846 [2024-11-20 14:02:20.340117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:12.846 [2024-11-20 14:02:20.340127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:12.846 [2024-11-20 14:02:20.340138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:12.846 [2024-11-20 14:02:20.340147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:12.846 [2024-11-20 14:02:20.340225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:12.846 [2024-11-20 14:02:20.340239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:12.846 [2024-11-20 14:02:20.340254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:12.846 [2024-11-20 14:02:20.340263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:12.846 [2024-11-20 14:02:20.340283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:12.846 [2024-11-20 14:02:20.340294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:12.846 [2024-11-20 14:02:20.340303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:12.846 [2024-11-20 14:02:20.340313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:12.846 [2024-11-20 14:02:20.474883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:12.846 [2024-11-20 14:02:20.474978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:12.846 [2024-11-20 14:02:20.475003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:12.846 [2024-11-20 14:02:20.475012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:13.110 [2024-11-20 14:02:20.585073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:13.110 [2024-11-20 14:02:20.585164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:13.110 [2024-11-20 14:02:20.585181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:13.110 [2024-11-20 14:02:20.585192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:13.110 [2024-11-20 14:02:20.585315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:13.110 [2024-11-20 14:02:20.585327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:13.110 [2024-11-20 14:02:20.585338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:13.110 [2024-11-20 14:02:20.585356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:13.110 [2024-11-20 14:02:20.585404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:13.110 [2024-11-20 14:02:20.585415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize bands 00:42:13.110 [2024-11-20 14:02:20.585425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:13.110 [2024-11-20 14:02:20.585435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:13.110 [2024-11-20 14:02:20.585566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:13.110 [2024-11-20 14:02:20.585582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:13.110 [2024-11-20 14:02:20.585593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:13.111 [2024-11-20 14:02:20.585602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:13.111 [2024-11-20 14:02:20.585653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:13.111 [2024-11-20 14:02:20.585665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:42:13.111 [2024-11-20 14:02:20.585675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:13.111 [2024-11-20 14:02:20.585685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:13.111 [2024-11-20 14:02:20.585768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:13.111 [2024-11-20 14:02:20.585799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:13.111 [2024-11-20 14:02:20.585809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:13.111 [2024-11-20 14:02:20.585819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:13.111 [2024-11-20 14:02:20.585897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:13.111 [2024-11-20 14:02:20.585909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:13.111 [2024-11-20 14:02:20.585918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:13.111 [2024-11-20 14:02:20.585929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:13.111 [2024-11-20 14:02:20.586083] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 691.190 ms, result 0 00:42:14.496 00:42:14.496 00:42:14.756 14:02:22 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:42:14.756 [2024-11-20 14:02:22.303684] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
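The shutdown dump just above reports total writes 113088 against user writes 112128 (WAF 1.0086), and the spdk_dd restore step reads --count=262144 blocks after skipping --skip=131072. A minimal sanity-check sketch in Python, assuming a 4 KiB FTL logical block size (the log never states the block size directly; 4 KiB is consistent with the 1024/1024 [MB] copy total reported further down):

# waf_check.py (hypothetical helper) - sanity-check the numbers printed by
# ftl_dev_dump_stats and the spdk_dd transfer above. The constants are copied
# from the log; BLOCK_SIZE is an assumption, not something the log states.
TOTAL_WRITES = 113088   # "total writes" from ftl_dev_dump_stats
USER_WRITES = 112128    # "user writes" from ftl_dev_dump_stats
BLOCK_SIZE = 4096       # assumed FTL logical block size in bytes

# Write amplification: all blocks the FTL wrote (user data plus its own
# metadata) divided by the blocks the user submitted.
waf = TOTAL_WRITES / USER_WRITES
print(f"WAF = {waf:.4f}")                # 1.0086, matching the log line above

# spdk_dd ran with --skip=131072 --count=262144 (both in logical blocks).
skip_bytes = 131072 * BLOCK_SIZE         # 512 MiB skipped at the input
count_bytes = 262144 * BLOCK_SIZE        # 1 GiB actually copied
print(f"skip = {skip_bytes >> 20} MiB, count = {count_bytes >> 20} MiB")

Both derived values line up with what the log prints later (the copy progress tops out at 1024/1024 [MB]), which is a quick way to confirm the restore read the intended 1 GiB window.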
00:42:14.756 [2024-11-20 14:02:22.303853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81357 ] 00:42:15.016 [2024-11-20 14:02:22.481134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:15.016 [2024-11-20 14:02:22.615863] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:15.588 [2024-11-20 14:02:23.022104] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:15.588 [2024-11-20 14:02:23.022319] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:42:15.588 [2024-11-20 14:02:23.183626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.588 [2024-11-20 14:02:23.183708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:15.588 [2024-11-20 14:02:23.183786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:42:15.588 [2024-11-20 14:02:23.183797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.588 [2024-11-20 14:02:23.183855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.588 [2024-11-20 14:02:23.183882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:15.588 [2024-11-20 14:02:23.183896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:42:15.588 [2024-11-20 14:02:23.183905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.588 [2024-11-20 14:02:23.183927] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:42:15.588 [2024-11-20 14:02:23.184848] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:15.588 [2024-11-20 14:02:23.184875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.588 [2024-11-20 14:02:23.184885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:15.588 [2024-11-20 14:02:23.184895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.954 ms 00:42:15.588 [2024-11-20 14:02:23.184904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.588 [2024-11-20 14:02:23.187387] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:42:15.588 [2024-11-20 14:02:23.206152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.588 [2024-11-20 14:02:23.206195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:42:15.588 [2024-11-20 14:02:23.206210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.802 ms 00:42:15.588 [2024-11-20 14:02:23.206236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.588 [2024-11-20 14:02:23.206316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.588 [2024-11-20 14:02:23.206328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:42:15.588 [2024-11-20 14:02:23.206338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:42:15.588 [2024-11-20 14:02:23.206348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.588 [2024-11-20 14:02:23.218950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:42:15.588 [2024-11-20 14:02:23.218989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:15.588 [2024-11-20 14:02:23.219003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.549 ms 00:42:15.588 [2024-11-20 14:02:23.219036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.588 [2024-11-20 14:02:23.219134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.588 [2024-11-20 14:02:23.219151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:15.588 [2024-11-20 14:02:23.219161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:42:15.588 [2024-11-20 14:02:23.219171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.588 [2024-11-20 14:02:23.219240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.588 [2024-11-20 14:02:23.219251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:42:15.588 [2024-11-20 14:02:23.219261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:42:15.588 [2024-11-20 14:02:23.219271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.588 [2024-11-20 14:02:23.219306] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:15.588 [2024-11-20 14:02:23.224987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.588 [2024-11-20 14:02:23.225101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:15.588 [2024-11-20 14:02:23.225118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.705 ms 00:42:15.588 [2024-11-20 14:02:23.225133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.588 [2024-11-20 14:02:23.225169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.588 [2024-11-20 14:02:23.225179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:15.588 [2024-11-20 14:02:23.225190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:42:15.588 [2024-11-20 14:02:23.225215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.588 [2024-11-20 14:02:23.225256] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:42:15.588 [2024-11-20 14:02:23.225282] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:42:15.588 [2024-11-20 14:02:23.225319] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:42:15.588 [2024-11-20 14:02:23.225342] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:42:15.589 [2024-11-20 14:02:23.225431] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:42:15.589 [2024-11-20 14:02:23.225444] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:15.589 [2024-11-20 14:02:23.225456] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:42:15.589 [2024-11-20 14:02:23.225468] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:15.589 [2024-11-20 14:02:23.225478] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:15.589 [2024-11-20 14:02:23.225489] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:42:15.589 [2024-11-20 14:02:23.225499] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:15.589 [2024-11-20 14:02:23.225508] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:42:15.589 [2024-11-20 14:02:23.225521] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:42:15.589 [2024-11-20 14:02:23.225532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.589 [2024-11-20 14:02:23.225541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:15.589 [2024-11-20 14:02:23.225550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:42:15.589 [2024-11-20 14:02:23.225559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.589 [2024-11-20 14:02:23.225630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.589 [2024-11-20 14:02:23.225640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:15.589 [2024-11-20 14:02:23.225651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:42:15.589 [2024-11-20 14:02:23.225660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.589 [2024-11-20 14:02:23.225787] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:15.589 [2024-11-20 14:02:23.225805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:15.589 [2024-11-20 14:02:23.225815] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:15.589 [2024-11-20 14:02:23.225825] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:15.589 [2024-11-20 14:02:23.225835] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:42:15.589 [2024-11-20 14:02:23.225844] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:15.589 [2024-11-20 14:02:23.225852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:42:15.589 [2024-11-20 14:02:23.225861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:15.589 [2024-11-20 14:02:23.225871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:42:15.589 [2024-11-20 14:02:23.225882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:15.589 [2024-11-20 14:02:23.225891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:15.589 [2024-11-20 14:02:23.225900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:42:15.589 [2024-11-20 14:02:23.225909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:15.589 [2024-11-20 14:02:23.225917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:15.589 [2024-11-20 14:02:23.225927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:42:15.589 [2024-11-20 14:02:23.225947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:15.589 [2024-11-20 14:02:23.225956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:42:15.589 [2024-11-20 14:02:23.225969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:42:15.589 [2024-11-20 14:02:23.225978] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:15.589 [2024-11-20 14:02:23.225987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:15.589 [2024-11-20 14:02:23.225995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:42:15.589 [2024-11-20 14:02:23.226003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:15.589 [2024-11-20 14:02:23.226011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:15.589 [2024-11-20 14:02:23.226020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:42:15.589 [2024-11-20 14:02:23.226028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:15.589 [2024-11-20 14:02:23.226036] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:15.589 [2024-11-20 14:02:23.226044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:42:15.589 [2024-11-20 14:02:23.226052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:15.589 [2024-11-20 14:02:23.226060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:42:15.589 [2024-11-20 14:02:23.226068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:42:15.589 [2024-11-20 14:02:23.226076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:15.589 [2024-11-20 14:02:23.226084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:15.589 [2024-11-20 14:02:23.226092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:42:15.589 [2024-11-20 14:02:23.226100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:15.589 [2024-11-20 14:02:23.226108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:42:15.589 [2024-11-20 14:02:23.226116] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:42:15.589 [2024-11-20 14:02:23.226124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:15.589 [2024-11-20 14:02:23.226132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:42:15.589 [2024-11-20 14:02:23.226140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:42:15.589 [2024-11-20 14:02:23.226149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:15.589 [2024-11-20 14:02:23.226157] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:42:15.589 [2024-11-20 14:02:23.226165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:42:15.589 [2024-11-20 14:02:23.226173] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:15.589 [2024-11-20 14:02:23.226181] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:15.589 [2024-11-20 14:02:23.226191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:15.589 [2024-11-20 14:02:23.226200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:15.589 [2024-11-20 14:02:23.226209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:15.589 [2024-11-20 14:02:23.226217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:42:15.589 [2024-11-20 14:02:23.226225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:42:15.589 [2024-11-20 14:02:23.226233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:42:15.589 
[2024-11-20 14:02:23.226241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:42:15.589 [2024-11-20 14:02:23.226250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:42:15.589 [2024-11-20 14:02:23.226257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:42:15.589 [2024-11-20 14:02:23.226267] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:42:15.589 [2024-11-20 14:02:23.226278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:15.589 [2024-11-20 14:02:23.226288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:42:15.589 [2024-11-20 14:02:23.226297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:42:15.589 [2024-11-20 14:02:23.226305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:42:15.589 [2024-11-20 14:02:23.226314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:42:15.589 [2024-11-20 14:02:23.226323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:42:15.589 [2024-11-20 14:02:23.226332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:42:15.589 [2024-11-20 14:02:23.226340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:42:15.589 [2024-11-20 14:02:23.226349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:42:15.589 [2024-11-20 14:02:23.226357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:42:15.589 [2024-11-20 14:02:23.226366] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:42:15.589 [2024-11-20 14:02:23.226375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:42:15.589 [2024-11-20 14:02:23.226383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:42:15.589 [2024-11-20 14:02:23.226392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:42:15.589 [2024-11-20 14:02:23.226401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:42:15.589 [2024-11-20 14:02:23.226410] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:42:15.589 [2024-11-20 14:02:23.226425] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:15.589 [2024-11-20 14:02:23.226435] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:42:15.589 [2024-11-20 14:02:23.226444] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:42:15.589 [2024-11-20 14:02:23.226453] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:42:15.589 [2024-11-20 14:02:23.226463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:42:15.589 [2024-11-20 14:02:23.226473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.589 [2024-11-20 14:02:23.226482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:42:15.589 [2024-11-20 14:02:23.226492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:42:15.589 [2024-11-20 14:02:23.226501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.589 [2024-11-20 14:02:23.273988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.589 [2024-11-20 14:02:23.274042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:15.590 [2024-11-20 14:02:23.274060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.520 ms 00:42:15.590 [2024-11-20 14:02:23.274070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.590 [2024-11-20 14:02:23.274188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.590 [2024-11-20 14:02:23.274199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:42:15.590 [2024-11-20 14:02:23.274210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:42:15.590 [2024-11-20 14:02:23.274219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.851 [2024-11-20 14:02:23.339471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.851 [2024-11-20 14:02:23.339619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:15.851 [2024-11-20 14:02:23.339639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.288 ms 00:42:15.851 [2024-11-20 14:02:23.339650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.851 [2024-11-20 14:02:23.339747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.851 [2024-11-20 14:02:23.339760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:15.851 [2024-11-20 14:02:23.339777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:42:15.851 [2024-11-20 14:02:23.339787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.851 [2024-11-20 14:02:23.340629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.851 [2024-11-20 14:02:23.340643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:15.851 [2024-11-20 14:02:23.340654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:42:15.851 [2024-11-20 14:02:23.340664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.851 [2024-11-20 14:02:23.340814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.851 [2024-11-20 14:02:23.340829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:15.851 [2024-11-20 14:02:23.340840] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:42:15.851 [2024-11-20 14:02:23.340855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.851 [2024-11-20 14:02:23.363028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.851 [2024-11-20 14:02:23.363076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:15.851 [2024-11-20 14:02:23.363110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.187 ms 00:42:15.851 [2024-11-20 14:02:23.363120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.851 [2024-11-20 14:02:23.382878] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:42:15.851 [2024-11-20 14:02:23.382925] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:42:15.851 [2024-11-20 14:02:23.382941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.851 [2024-11-20 14:02:23.382951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:42:15.851 [2024-11-20 14:02:23.382964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.699 ms 00:42:15.851 [2024-11-20 14:02:23.382973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.851 [2024-11-20 14:02:23.413332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.851 [2024-11-20 14:02:23.413387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:42:15.851 [2024-11-20 14:02:23.413403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.357 ms 00:42:15.851 [2024-11-20 14:02:23.413413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.851 [2024-11-20 14:02:23.431608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.851 [2024-11-20 14:02:23.431669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:42:15.851 [2024-11-20 14:02:23.431683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.192 ms 00:42:15.851 [2024-11-20 14:02:23.431699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.851 [2024-11-20 14:02:23.449168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.851 [2024-11-20 14:02:23.449211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:42:15.851 [2024-11-20 14:02:23.449225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.428 ms 00:42:15.851 [2024-11-20 14:02:23.449235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.851 [2024-11-20 14:02:23.450111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.851 [2024-11-20 14:02:23.450145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:42:15.851 [2024-11-20 14:02:23.450158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:42:15.851 [2024-11-20 14:02:23.450172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.851 [2024-11-20 14:02:23.546534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.851 [2024-11-20 14:02:23.546627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:42:15.851 [2024-11-20 14:02:23.546669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 96.519 ms 00:42:15.851 [2024-11-20 14:02:23.546679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.851 [2024-11-20 14:02:23.557562] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:42:15.851 [2024-11-20 14:02:23.562468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.851 [2024-11-20 14:02:23.562504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:42:15.851 [2024-11-20 14:02:23.562520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.733 ms 00:42:15.851 [2024-11-20 14:02:23.562532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.851 [2024-11-20 14:02:23.562646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.851 [2024-11-20 14:02:23.562659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:42:15.852 [2024-11-20 14:02:23.562670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:42:15.852 [2024-11-20 14:02:23.562684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.852 [2024-11-20 14:02:23.564958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.852 [2024-11-20 14:02:23.565003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:42:15.852 [2024-11-20 14:02:23.565016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.197 ms 00:42:15.852 [2024-11-20 14:02:23.565026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.852 [2024-11-20 14:02:23.565074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.852 [2024-11-20 14:02:23.565086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:42:15.852 [2024-11-20 14:02:23.565096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:42:15.852 [2024-11-20 14:02:23.565106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:15.852 [2024-11-20 14:02:23.565158] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:42:15.852 [2024-11-20 14:02:23.565171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:15.852 [2024-11-20 14:02:23.565182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:42:15.852 [2024-11-20 14:02:23.565192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:42:15.852 [2024-11-20 14:02:23.565201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.112 [2024-11-20 14:02:23.602606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.112 [2024-11-20 14:02:23.602650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:42:16.112 [2024-11-20 14:02:23.602664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.453 ms 00:42:16.112 [2024-11-20 14:02:23.602681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:16.112 [2024-11-20 14:02:23.602779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:16.112 [2024-11-20 14:02:23.602792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:42:16.112 [2024-11-20 14:02:23.602803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:42:16.112 [2024-11-20 14:02:23.602812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
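Every management step above is traced from mngt/ftl_mngt.c as an Action/name/duration/status quadruple, and the 'FTL startup' total of 420.993 ms reported on the next line is dominated by a few of them: Restore P2L checkpoints (96.519 ms), Initialize NV cache (65.288 ms) and Initialize metadata (47.520 ms). A small, hypothetical post-processing sketch for ranking those steps from a captured log file; it assumes the exact trace_step wording shown above, and the script name and argument are illustrative:

# trace_times.py (hypothetical helper) - rank FTL trace_step entries by
# duration in a captured autotest log. Assumes each "name:" record is
# eventually followed by its "duration: ... ms" record, as in this log.
import re
import sys

STEP = re.compile(
    r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (.+?)\s+\d{2}:\d{2}:\d{2}\."
    r".*?trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: ([0-9.]+) ms",
    re.DOTALL,
)

def main(path: str) -> None:
    with open(path) as f:
        text = f.read()          # whole log in memory; fine at this size
    steps = [(float(ms), name) for name, ms in STEP.findall(text)]
    for ms, name in sorted(steps, reverse=True)[:10]:
        print(f"{ms:10.3f} ms  {name}")   # ten slowest steps first

if __name__ == "__main__":
    main(sys.argv[1])

Run as python3 trace_times.py <captured log>; on the startup sequence above, Restore P2L checkpoints would come out on top.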
00:42:16.112 [2024-11-20 14:02:23.604333] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 420.993 ms, result 0 00:42:17.492  [2024-11-20T14:02:25.782Z] Copying: 26/1024 [MB] (26 MBps) [2024-11-20T14:02:27.163Z] Copying: 56/1024 [MB] (29 MBps) [2024-11-20T14:02:28.102Z] Copying: 87/1024 [MB] (31 MBps) [2024-11-20T14:02:29.042Z] Copying: 118/1024 [MB] (30 MBps) [2024-11-20T14:02:29.985Z] Copying: 148/1024 [MB] (29 MBps) [2024-11-20T14:02:30.923Z] Copying: 177/1024 [MB] (29 MBps) [2024-11-20T14:02:31.861Z] Copying: 208/1024 [MB] (31 MBps) [2024-11-20T14:02:32.802Z] Copying: 239/1024 [MB] (31 MBps) [2024-11-20T14:02:34.184Z] Copying: 270/1024 [MB] (31 MBps) [2024-11-20T14:02:35.124Z] Copying: 302/1024 [MB] (31 MBps) [2024-11-20T14:02:36.065Z] Copying: 335/1024 [MB] (32 MBps) [2024-11-20T14:02:37.004Z] Copying: 365/1024 [MB] (30 MBps) [2024-11-20T14:02:37.941Z] Copying: 396/1024 [MB] (30 MBps) [2024-11-20T14:02:38.878Z] Copying: 427/1024 [MB] (30 MBps) [2024-11-20T14:02:39.816Z] Copying: 457/1024 [MB] (30 MBps) [2024-11-20T14:02:40.754Z] Copying: 488/1024 [MB] (30 MBps) [2024-11-20T14:02:42.132Z] Copying: 518/1024 [MB] (30 MBps) [2024-11-20T14:02:43.066Z] Copying: 549/1024 [MB] (30 MBps) [2024-11-20T14:02:44.000Z] Copying: 579/1024 [MB] (29 MBps) [2024-11-20T14:02:44.938Z] Copying: 608/1024 [MB] (29 MBps) [2024-11-20T14:02:45.920Z] Copying: 638/1024 [MB] (29 MBps) [2024-11-20T14:02:46.859Z] Copying: 670/1024 [MB] (32 MBps) [2024-11-20T14:02:47.795Z] Copying: 702/1024 [MB] (32 MBps) [2024-11-20T14:02:48.733Z] Copying: 735/1024 [MB] (32 MBps) [2024-11-20T14:02:50.117Z] Copying: 766/1024 [MB] (31 MBps) [2024-11-20T14:02:51.056Z] Copying: 798/1024 [MB] (32 MBps) [2024-11-20T14:02:51.995Z] Copying: 832/1024 [MB] (33 MBps) [2024-11-20T14:02:52.931Z] Copying: 865/1024 [MB] (32 MBps) [2024-11-20T14:02:53.869Z] Copying: 896/1024 [MB] (31 MBps) [2024-11-20T14:02:54.808Z] Copying: 930/1024 [MB] (33 MBps) [2024-11-20T14:02:55.745Z] Copying: 962/1024 [MB] (32 MBps) [2024-11-20T14:02:56.682Z] Copying: 995/1024 [MB] (33 MBps) [2024-11-20T14:02:56.682Z] Copying: 1024/1024 [MB] (average 31 MBps)[2024-11-20 14:02:56.631697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:48.963 [2024-11-20 14:02:56.631958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:42:48.963 [2024-11-20 14:02:56.632121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:42:48.963 [2024-11-20 14:02:56.632248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:48.963 [2024-11-20 14:02:56.632341] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:42:48.963 [2024-11-20 14:02:56.643270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:48.963 [2024-11-20 14:02:56.643353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:42:48.963 [2024-11-20 14:02:56.643377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.909 ms 00:42:48.963 [2024-11-20 14:02:56.643390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:48.963 [2024-11-20 14:02:56.643779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:48.963 [2024-11-20 14:02:56.643801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:42:48.963 [2024-11-20 14:02:56.643814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.348 ms 00:42:48.963 [2024-11-20 14:02:56.643827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:48.963 [2024-11-20 14:02:56.650251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:48.963 [2024-11-20 14:02:56.650370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:42:48.963 [2024-11-20 14:02:56.650408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.406 ms 00:42:48.963 [2024-11-20 14:02:56.650421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:48.963 [2024-11-20 14:02:56.656887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:48.963 [2024-11-20 14:02:56.656978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:42:48.963 [2024-11-20 14:02:56.656994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.427 ms 00:42:48.963 [2024-11-20 14:02:56.657003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.222 [2024-11-20 14:02:56.694130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:49.222 [2024-11-20 14:02:56.694173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:42:49.222 [2024-11-20 14:02:56.694185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.116 ms 00:42:49.222 [2024-11-20 14:02:56.694192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.222 [2024-11-20 14:02:56.715466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:49.222 [2024-11-20 14:02:56.715511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:42:49.222 [2024-11-20 14:02:56.715524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.278 ms 00:42:49.222 [2024-11-20 14:02:56.715532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.222 [2024-11-20 14:02:56.819515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:49.222 [2024-11-20 14:02:56.819607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:42:49.222 [2024-11-20 14:02:56.819623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 104.137 ms 00:42:49.222 [2024-11-20 14:02:56.819633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.222 [2024-11-20 14:02:56.857595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:49.222 [2024-11-20 14:02:56.857640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:42:49.222 [2024-11-20 14:02:56.857652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.011 ms 00:42:49.222 [2024-11-20 14:02:56.857660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.222 [2024-11-20 14:02:56.892847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:49.222 [2024-11-20 14:02:56.892883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:42:49.222 [2024-11-20 14:02:56.892910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.218 ms 00:42:49.222 [2024-11-20 14:02:56.892917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.222 [2024-11-20 14:02:56.927397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:49.222 [2024-11-20 14:02:56.927432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:42:49.222 [2024-11-20 14:02:56.927444] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.515 ms 00:42:49.222 [2024-11-20 14:02:56.927450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.482 [2024-11-20 14:02:56.962841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:49.482 [2024-11-20 14:02:56.962875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:42:49.482 [2024-11-20 14:02:56.962885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.390 ms 00:42:49.482 [2024-11-20 14:02:56.962893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.482 [2024-11-20 14:02:56.962923] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:42:49.482 [2024-11-20 14:02:56.962951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:42:49.482 [2024-11-20 14:02:56.962963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.962972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.962979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.962987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.962994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.963001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.963009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.963016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.963024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.963031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.963038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.963046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.963053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.963060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.963067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.963075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.963082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.963089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:42:49.482 [2024-11-20 14:02:56.963097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:42:49.483 ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21 through Band 94: 0 / 261120 wr_cnt: 0 state: free
00:42:49.483 [2024-11-20 14:02:56.963623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:42:49.483 [2024-11-20 14:02:56.963630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:42:49.483 [2024-11-20 14:02:56.963637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:42:49.483 [2024-11-20 14:02:56.963643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:42:49.483 [2024-11-20 14:02:56.963651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:42:49.483 [2024-11-20 14:02:56.963659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:42:49.483 [2024-11-20 14:02:56.963672] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:42:49.483 [2024-11-20 14:02:56.963680] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9fa4affe-81c0-4583-92ea-789529719ac0 00:42:49.483 [2024-11-20 14:02:56.963688] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:42:49.483 [2024-11-20 14:02:56.963695] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 19904 00:42:49.483 [2024-11-20 14:02:56.963701] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 18944 00:42:49.483 [2024-11-20 14:02:56.963709] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0507 00:42:49.483 [2024-11-20 14:02:56.963756] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:42:49.483 [2024-11-20 14:02:56.963769] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:42:49.483 [2024-11-20 14:02:56.963776] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:42:49.483 [2024-11-20 14:02:56.963793] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:42:49.483 [2024-11-20 14:02:56.963799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:42:49.483 [2024-11-20 14:02:56.963806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:49.484 [2024-11-20 14:02:56.963814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:42:49.484 [2024-11-20 14:02:56.963822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.885 ms 00:42:49.484 [2024-11-20 14:02:56.963830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.484 [2024-11-20 14:02:56.982994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:49.484 [2024-11-20 14:02:56.983029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:42:49.484 [2024-11-20 14:02:56.983039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.170 ms 00:42:49.484 [2024-11-20 14:02:56.983051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.484 [2024-11-20 14:02:56.983541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:49.484 [2024-11-20 14:02:56.983551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:42:49.484 [2024-11-20 14:02:56.983560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.472 ms 00:42:49.484 [2024-11-20 14:02:56.983567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.484 [2024-11-20 14:02:57.032629] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:49.484 [2024-11-20 14:02:57.032668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:42:49.484 [2024-11-20 14:02:57.032679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:49.484 [2024-11-20 14:02:57.032687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.484 [2024-11-20 14:02:57.032752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:49.484 [2024-11-20 14:02:57.032762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:42:49.484 [2024-11-20 14:02:57.032770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:49.484 [2024-11-20 14:02:57.032777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.484 [2024-11-20 14:02:57.032830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:49.484 [2024-11-20 14:02:57.032842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:42:49.484 [2024-11-20 14:02:57.032855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:49.484 [2024-11-20 14:02:57.032863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.484 [2024-11-20 14:02:57.032879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:49.484 [2024-11-20 14:02:57.032887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:42:49.484 [2024-11-20 14:02:57.032894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:49.484 [2024-11-20 14:02:57.032901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.484 [2024-11-20 14:02:57.154848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:49.484 [2024-11-20 14:02:57.154906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:42:49.484 [2024-11-20 14:02:57.154924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:49.484 [2024-11-20 14:02:57.154932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.743 [2024-11-20 14:02:57.255039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:49.743 [2024-11-20 14:02:57.255100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:42:49.743 [2024-11-20 14:02:57.255112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:49.743 [2024-11-20 14:02:57.255120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.743 [2024-11-20 14:02:57.255210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:49.743 [2024-11-20 14:02:57.255220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:49.743 [2024-11-20 14:02:57.255227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:49.743 [2024-11-20 14:02:57.255239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.743 [2024-11-20 14:02:57.255275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:49.743 [2024-11-20 14:02:57.255284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:49.743 [2024-11-20 14:02:57.255292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:49.743 [2024-11-20 14:02:57.255299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:42:49.743 [2024-11-20 14:02:57.255410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:49.743 [2024-11-20 14:02:57.255421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:49.743 [2024-11-20 14:02:57.255430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:49.743 [2024-11-20 14:02:57.255437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.743 [2024-11-20 14:02:57.255473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:49.743 [2024-11-20 14:02:57.255483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:42:49.743 [2024-11-20 14:02:57.255491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:49.743 [2024-11-20 14:02:57.255498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.743 [2024-11-20 14:02:57.255535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:49.743 [2024-11-20 14:02:57.255544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:49.743 [2024-11-20 14:02:57.255552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:49.743 [2024-11-20 14:02:57.255559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.743 [2024-11-20 14:02:57.255601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:42:49.743 [2024-11-20 14:02:57.255609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:49.743 [2024-11-20 14:02:57.255617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:42:49.743 [2024-11-20 14:02:57.255624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:49.743 [2024-11-20 14:02:57.255770] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 625.256 ms, result 0 00:42:51.123 00:42:51.123 00:42:51.123 14:02:58 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:42:53.029 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:42:53.029 14:03:00 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:42:53.029 14:03:00 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:42:53.029 14:03:00 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:42:53.029 14:03:00 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:42:53.029 14:03:00 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:42:53.029 Process with pid 79906 is not found 00:42:53.029 Remove shared memory files 00:42:53.029 14:03:00 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79906 00:42:53.029 14:03:00 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79906 ']' 00:42:53.029 14:03:00 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79906 00:42:53.029 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79906) - No such process 00:42:53.029 14:03:00 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79906 is not found' 00:42:53.029 14:03:00 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:42:53.029 14:03:00 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:42:53.029 14:03:00 ftl.ftl_restore -- 
ftl/common.sh@205 -- # rm -f rm -f 00:42:53.029 14:03:00 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:42:53.029 14:03:00 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:42:53.029 14:03:00 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:42:53.029 14:03:00 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:42:53.029 ************************************ 00:42:53.029 END TEST ftl_restore 00:42:53.029 ************************************ 00:42:53.029 00:42:53.029 real 2m56.043s 00:42:53.029 user 2m43.069s 00:42:53.029 sys 0m13.784s 00:42:53.030 14:03:00 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:53.030 14:03:00 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:42:53.030 14:03:00 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:42:53.030 14:03:00 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:42:53.030 14:03:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:53.030 14:03:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:42:53.030 ************************************ 00:42:53.030 START TEST ftl_dirty_shutdown 00:42:53.030 ************************************ 00:42:53.030 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:42:53.030 * Looking for test storage... 00:42:53.030 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:42:53.030 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:53.030 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:42:53.030 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:53.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.291 --rc genhtml_branch_coverage=1 00:42:53.291 --rc genhtml_function_coverage=1 00:42:53.291 --rc genhtml_legend=1 00:42:53.291 --rc geninfo_all_blocks=1 00:42:53.291 --rc geninfo_unexecuted_blocks=1 00:42:53.291 00:42:53.291 ' 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:53.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.291 --rc genhtml_branch_coverage=1 00:42:53.291 --rc genhtml_function_coverage=1 00:42:53.291 --rc genhtml_legend=1 00:42:53.291 --rc geninfo_all_blocks=1 00:42:53.291 --rc geninfo_unexecuted_blocks=1 00:42:53.291 00:42:53.291 ' 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:53.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.291 --rc genhtml_branch_coverage=1 00:42:53.291 --rc genhtml_function_coverage=1 00:42:53.291 --rc genhtml_legend=1 00:42:53.291 --rc geninfo_all_blocks=1 00:42:53.291 --rc geninfo_unexecuted_blocks=1 00:42:53.291 00:42:53.291 ' 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:53.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:53.291 --rc genhtml_branch_coverage=1 00:42:53.291 --rc genhtml_function_coverage=1 00:42:53.291 --rc genhtml_legend=1 00:42:53.291 --rc geninfo_all_blocks=1 00:42:53.291 --rc geninfo_unexecuted_blocks=1 00:42:53.291 00:42:53.291 ' 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:42:53.291 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:42:53.292 14:03:00 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81808 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81808 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81808 ']' 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:53.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:53.292 14:03:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:42:53.292 [2024-11-20 14:03:00.990611] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
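[editor's note] At this point spdk_tgt is starting and dirty_shutdown.sh begins assembling the bdev stack one rpc.py call at a time. Condensed into the order the xtrace below reports (a sketch only: the PCI addresses, sizes, and UUIDs are the ones this particular run created, and rpc.py here stands for /home/vagrant/spdk_repo/spdk/scripts/rpc.py):

    # Base device: local NVMe at 0000:00:11.0, exposed as nvme0n1
    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
    # Lvol store on the base device (the trace first deletes a leftover store),
    # then a 103424 MiB thin-provisioned lvol:
    # 26476544 blocks x 4096 B = 103424 MiB, matching the size check in the trace
    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 019b54ac-8804-4822-901c-1106fd630e39
    # NV-cache device: second NVMe at 0000:00:10.0, split to a 5171 MiB partition
    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
    rpc.py bdev_split_create nvc0n1 -s 5171 1
    # FTL device on top: lvol as base, split partition as write buffer cache,
    # L2P DRAM capped at 10 MiB (--l2p_dram_limit 10), 240 s RPC timeout (-t 240)
    rpc.py -t 240 bdev_ftl_create -b ftl0 -d 88d7d873-d60a-43a8-b73c-2ac4cb5f7672 --l2p_dram_limit 10 -c nvc0n1p0

Each step corresponds to an rpc.py invocation visible verbatim in the xtrace that follows; the final bdev_ftl_create call is what produces the mngt/ftl_mngt.c trace_step notices and the ftl_layout.c layout dump further down. [end note]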
00:42:53.292 [2024-11-20 14:03:00.991344] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81808 ] 00:42:53.553 [2024-11-20 14:03:01.170680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:53.813 [2024-11-20 14:03:01.293762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:54.751 14:03:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:54.751 14:03:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:42:54.751 14:03:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:42:54.751 14:03:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:42:54.751 14:03:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:42:54.751 14:03:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:42:54.751 14:03:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:42:54.751 14:03:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:42:55.011 14:03:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:42:55.011 14:03:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:42:55.011 14:03:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:42:55.011 14:03:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:42:55.011 14:03:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:42:55.011 14:03:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:42:55.011 14:03:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:42:55.011 14:03:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:42:55.271 14:03:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:42:55.271 { 00:42:55.271 "name": "nvme0n1", 00:42:55.271 "aliases": [ 00:42:55.271 "b6e78140-ab9b-4e1a-83ee-1f828003896c" 00:42:55.271 ], 00:42:55.271 "product_name": "NVMe disk", 00:42:55.271 "block_size": 4096, 00:42:55.271 "num_blocks": 1310720, 00:42:55.271 "uuid": "b6e78140-ab9b-4e1a-83ee-1f828003896c", 00:42:55.271 "numa_id": -1, 00:42:55.271 "assigned_rate_limits": { 00:42:55.271 "rw_ios_per_sec": 0, 00:42:55.271 "rw_mbytes_per_sec": 0, 00:42:55.271 "r_mbytes_per_sec": 0, 00:42:55.271 "w_mbytes_per_sec": 0 00:42:55.271 }, 00:42:55.271 "claimed": true, 00:42:55.271 "claim_type": "read_many_write_one", 00:42:55.271 "zoned": false, 00:42:55.271 "supported_io_types": { 00:42:55.271 "read": true, 00:42:55.271 "write": true, 00:42:55.271 "unmap": true, 00:42:55.271 "flush": true, 00:42:55.271 "reset": true, 00:42:55.271 "nvme_admin": true, 00:42:55.271 "nvme_io": true, 00:42:55.271 "nvme_io_md": false, 00:42:55.271 "write_zeroes": true, 00:42:55.271 "zcopy": false, 00:42:55.271 "get_zone_info": false, 00:42:55.271 "zone_management": false, 00:42:55.271 "zone_append": false, 00:42:55.271 "compare": true, 00:42:55.271 "compare_and_write": false, 00:42:55.271 "abort": true, 00:42:55.271 "seek_hole": false, 00:42:55.271 "seek_data": false, 00:42:55.271 
"copy": true, 00:42:55.271 "nvme_iov_md": false 00:42:55.271 }, 00:42:55.271 "driver_specific": { 00:42:55.271 "nvme": [ 00:42:55.271 { 00:42:55.271 "pci_address": "0000:00:11.0", 00:42:55.271 "trid": { 00:42:55.271 "trtype": "PCIe", 00:42:55.271 "traddr": "0000:00:11.0" 00:42:55.271 }, 00:42:55.271 "ctrlr_data": { 00:42:55.271 "cntlid": 0, 00:42:55.271 "vendor_id": "0x1b36", 00:42:55.271 "model_number": "QEMU NVMe Ctrl", 00:42:55.271 "serial_number": "12341", 00:42:55.271 "firmware_revision": "8.0.0", 00:42:55.271 "subnqn": "nqn.2019-08.org.qemu:12341", 00:42:55.271 "oacs": { 00:42:55.271 "security": 0, 00:42:55.271 "format": 1, 00:42:55.271 "firmware": 0, 00:42:55.271 "ns_manage": 1 00:42:55.271 }, 00:42:55.271 "multi_ctrlr": false, 00:42:55.271 "ana_reporting": false 00:42:55.271 }, 00:42:55.271 "vs": { 00:42:55.271 "nvme_version": "1.4" 00:42:55.271 }, 00:42:55.271 "ns_data": { 00:42:55.271 "id": 1, 00:42:55.271 "can_share": false 00:42:55.271 } 00:42:55.271 } 00:42:55.271 ], 00:42:55.271 "mp_policy": "active_passive" 00:42:55.271 } 00:42:55.271 } 00:42:55.271 ]' 00:42:55.271 14:03:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:42:55.271 14:03:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:42:55.271 14:03:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:42:55.271 14:03:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:42:55.271 14:03:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:42:55.271 14:03:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:42:55.271 14:03:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:42:55.271 14:03:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:42:55.271 14:03:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:42:55.271 14:03:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:42:55.271 14:03:02 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:42:55.531 14:03:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=fb487434-25e5-4d32-bc66-1d993ea1f8c1 00:42:55.531 14:03:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:42:55.531 14:03:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fb487434-25e5-4d32-bc66-1d993ea1f8c1 00:42:55.790 14:03:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:42:56.050 14:03:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=019b54ac-8804-4822-901c-1106fd630e39 00:42:56.050 14:03:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 019b54ac-8804-4822-901c-1106fd630e39 00:42:56.050 14:03:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=88d7d873-d60a-43a8-b73c-2ac4cb5f7672 00:42:56.050 14:03:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:42:56.050 14:03:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 88d7d873-d60a-43a8-b73c-2ac4cb5f7672 00:42:56.050 14:03:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:42:56.050 14:03:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:42:56.050 14:03:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=88d7d873-d60a-43a8-b73c-2ac4cb5f7672 00:42:56.050 14:03:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:42:56.050 14:03:03 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 88d7d873-d60a-43a8-b73c-2ac4cb5f7672 00:42:56.050 14:03:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=88d7d873-d60a-43a8-b73c-2ac4cb5f7672 00:42:56.050 14:03:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:42:56.050 14:03:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:42:56.050 14:03:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:42:56.050 14:03:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88d7d873-d60a-43a8-b73c-2ac4cb5f7672 00:42:56.310 14:03:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:42:56.310 { 00:42:56.310 "name": "88d7d873-d60a-43a8-b73c-2ac4cb5f7672", 00:42:56.310 "aliases": [ 00:42:56.310 "lvs/nvme0n1p0" 00:42:56.310 ], 00:42:56.310 "product_name": "Logical Volume", 00:42:56.310 "block_size": 4096, 00:42:56.310 "num_blocks": 26476544, 00:42:56.310 "uuid": "88d7d873-d60a-43a8-b73c-2ac4cb5f7672", 00:42:56.310 "assigned_rate_limits": { 00:42:56.310 "rw_ios_per_sec": 0, 00:42:56.310 "rw_mbytes_per_sec": 0, 00:42:56.310 "r_mbytes_per_sec": 0, 00:42:56.310 "w_mbytes_per_sec": 0 00:42:56.310 }, 00:42:56.310 "claimed": false, 00:42:56.310 "zoned": false, 00:42:56.310 "supported_io_types": { 00:42:56.310 "read": true, 00:42:56.310 "write": true, 00:42:56.310 "unmap": true, 00:42:56.310 "flush": false, 00:42:56.310 "reset": true, 00:42:56.310 "nvme_admin": false, 00:42:56.310 "nvme_io": false, 00:42:56.310 "nvme_io_md": false, 00:42:56.310 "write_zeroes": true, 00:42:56.310 "zcopy": false, 00:42:56.310 "get_zone_info": false, 00:42:56.310 "zone_management": false, 00:42:56.310 "zone_append": false, 00:42:56.310 "compare": false, 00:42:56.310 "compare_and_write": false, 00:42:56.310 "abort": false, 00:42:56.310 "seek_hole": true, 00:42:56.310 "seek_data": true, 00:42:56.310 "copy": false, 00:42:56.310 "nvme_iov_md": false 00:42:56.310 }, 00:42:56.310 "driver_specific": { 00:42:56.310 "lvol": { 00:42:56.310 "lvol_store_uuid": "019b54ac-8804-4822-901c-1106fd630e39", 00:42:56.310 "base_bdev": "nvme0n1", 00:42:56.310 "thin_provision": true, 00:42:56.310 "num_allocated_clusters": 0, 00:42:56.310 "snapshot": false, 00:42:56.310 "clone": false, 00:42:56.310 "esnap_clone": false 00:42:56.310 } 00:42:56.310 } 00:42:56.310 } 00:42:56.310 ]' 00:42:56.310 14:03:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:42:56.310 14:03:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:42:56.310 14:03:03 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:42:56.310 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:42:56.310 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:42:56.310 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:42:56.310 14:03:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:42:56.310 14:03:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:42:56.310 14:03:04 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:42:56.570 14:03:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:42:56.570 14:03:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:42:56.570 14:03:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 88d7d873-d60a-43a8-b73c-2ac4cb5f7672 00:42:56.570 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=88d7d873-d60a-43a8-b73c-2ac4cb5f7672 00:42:56.570 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:42:56.570 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:42:56.570 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:42:56.570 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88d7d873-d60a-43a8-b73c-2ac4cb5f7672 00:42:56.830 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:42:56.830 { 00:42:56.830 "name": "88d7d873-d60a-43a8-b73c-2ac4cb5f7672", 00:42:56.830 "aliases": [ 00:42:56.830 "lvs/nvme0n1p0" 00:42:56.830 ], 00:42:56.830 "product_name": "Logical Volume", 00:42:56.830 "block_size": 4096, 00:42:56.830 "num_blocks": 26476544, 00:42:56.830 "uuid": "88d7d873-d60a-43a8-b73c-2ac4cb5f7672", 00:42:56.830 "assigned_rate_limits": { 00:42:56.830 "rw_ios_per_sec": 0, 00:42:56.830 "rw_mbytes_per_sec": 0, 00:42:56.830 "r_mbytes_per_sec": 0, 00:42:56.830 "w_mbytes_per_sec": 0 00:42:56.830 }, 00:42:56.830 "claimed": false, 00:42:56.830 "zoned": false, 00:42:56.830 "supported_io_types": { 00:42:56.830 "read": true, 00:42:56.830 "write": true, 00:42:56.830 "unmap": true, 00:42:56.830 "flush": false, 00:42:56.830 "reset": true, 00:42:56.830 "nvme_admin": false, 00:42:56.830 "nvme_io": false, 00:42:56.830 "nvme_io_md": false, 00:42:56.830 "write_zeroes": true, 00:42:56.830 "zcopy": false, 00:42:56.830 "get_zone_info": false, 00:42:56.830 "zone_management": false, 00:42:56.830 "zone_append": false, 00:42:56.830 "compare": false, 00:42:56.830 "compare_and_write": false, 00:42:56.830 "abort": false, 00:42:56.830 "seek_hole": true, 00:42:56.830 "seek_data": true, 00:42:56.830 "copy": false, 00:42:56.830 "nvme_iov_md": false 00:42:56.830 }, 00:42:56.830 "driver_specific": { 00:42:56.830 "lvol": { 00:42:56.830 "lvol_store_uuid": "019b54ac-8804-4822-901c-1106fd630e39", 00:42:56.830 "base_bdev": "nvme0n1", 00:42:56.830 "thin_provision": true, 00:42:56.830 "num_allocated_clusters": 0, 00:42:56.830 "snapshot": false, 00:42:56.830 "clone": false, 00:42:56.830 "esnap_clone": false 00:42:56.830 } 00:42:56.830 } 00:42:56.830 } 00:42:56.830 ]' 00:42:56.830 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:42:56.830 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:42:57.090 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:42:57.090 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:42:57.090 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:42:57.090 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:42:57.090 14:03:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:42:57.090 14:03:04 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:42:57.349 14:03:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:42:57.349 14:03:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 88d7d873-d60a-43a8-b73c-2ac4cb5f7672 00:42:57.349 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=88d7d873-d60a-43a8-b73c-2ac4cb5f7672 00:42:57.349 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:42:57.349 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:42:57.349 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:42:57.349 14:03:04 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 88d7d873-d60a-43a8-b73c-2ac4cb5f7672 00:42:57.349 14:03:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:42:57.349 { 00:42:57.349 "name": "88d7d873-d60a-43a8-b73c-2ac4cb5f7672", 00:42:57.349 "aliases": [ 00:42:57.349 "lvs/nvme0n1p0" 00:42:57.349 ], 00:42:57.349 "product_name": "Logical Volume", 00:42:57.349 "block_size": 4096, 00:42:57.349 "num_blocks": 26476544, 00:42:57.349 "uuid": "88d7d873-d60a-43a8-b73c-2ac4cb5f7672", 00:42:57.349 "assigned_rate_limits": { 00:42:57.349 "rw_ios_per_sec": 0, 00:42:57.349 "rw_mbytes_per_sec": 0, 00:42:57.349 "r_mbytes_per_sec": 0, 00:42:57.349 "w_mbytes_per_sec": 0 00:42:57.349 }, 00:42:57.349 "claimed": false, 00:42:57.349 "zoned": false, 00:42:57.349 "supported_io_types": { 00:42:57.349 "read": true, 00:42:57.349 "write": true, 00:42:57.349 "unmap": true, 00:42:57.349 "flush": false, 00:42:57.349 "reset": true, 00:42:57.349 "nvme_admin": false, 00:42:57.349 "nvme_io": false, 00:42:57.349 "nvme_io_md": false, 00:42:57.349 "write_zeroes": true, 00:42:57.349 "zcopy": false, 00:42:57.349 "get_zone_info": false, 00:42:57.349 "zone_management": false, 00:42:57.349 "zone_append": false, 00:42:57.349 "compare": false, 00:42:57.349 "compare_and_write": false, 00:42:57.349 "abort": false, 00:42:57.349 "seek_hole": true, 00:42:57.349 "seek_data": true, 00:42:57.349 "copy": false, 00:42:57.349 "nvme_iov_md": false 00:42:57.349 }, 00:42:57.349 "driver_specific": { 00:42:57.349 "lvol": { 00:42:57.349 "lvol_store_uuid": "019b54ac-8804-4822-901c-1106fd630e39", 00:42:57.349 "base_bdev": "nvme0n1", 00:42:57.349 "thin_provision": true, 00:42:57.349 "num_allocated_clusters": 0, 00:42:57.349 "snapshot": false, 00:42:57.349 "clone": false, 00:42:57.349 "esnap_clone": false 00:42:57.349 } 00:42:57.349 } 00:42:57.349 } 00:42:57.349 ]' 00:42:57.349 14:03:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:42:57.349 14:03:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:42:57.349 14:03:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:42:57.610 14:03:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:42:57.610 14:03:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:42:57.610 14:03:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:42:57.611 14:03:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:42:57.611 14:03:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 88d7d873-d60a-43a8-b73c-2ac4cb5f7672 
--l2p_dram_limit 10' 00:42:57.611 14:03:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:42:57.611 14:03:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:42:57.611 14:03:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:42:57.611 14:03:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 88d7d873-d60a-43a8-b73c-2ac4cb5f7672 --l2p_dram_limit 10 -c nvc0n1p0 00:42:57.611 [2024-11-20 14:03:05.268054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:57.611 [2024-11-20 14:03:05.268201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:42:57.611 [2024-11-20 14:03:05.268222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:42:57.611 [2024-11-20 14:03:05.268232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:57.611 [2024-11-20 14:03:05.268313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:57.611 [2024-11-20 14:03:05.268326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:42:57.611 [2024-11-20 14:03:05.268336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:42:57.611 [2024-11-20 14:03:05.268344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:57.611 [2024-11-20 14:03:05.268367] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:42:57.611 [2024-11-20 14:03:05.269418] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:42:57.611 [2024-11-20 14:03:05.269450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:57.611 [2024-11-20 14:03:05.269459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:42:57.611 [2024-11-20 14:03:05.269470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:42:57.611 [2024-11-20 14:03:05.269478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:57.611 [2024-11-20 14:03:05.269553] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID a7a8e5bc-d38f-4e24-83a5-ef0fc97d3e10 00:42:57.611 [2024-11-20 14:03:05.270938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:57.611 [2024-11-20 14:03:05.270966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:42:57.611 [2024-11-20 14:03:05.270977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:42:57.611 [2024-11-20 14:03:05.270988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:57.611 [2024-11-20 14:03:05.278359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:57.611 [2024-11-20 14:03:05.278432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:42:57.611 [2024-11-20 14:03:05.278460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.331 ms 00:42:57.611 [2024-11-20 14:03:05.278484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:57.611 [2024-11-20 14:03:05.278599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:57.611 [2024-11-20 14:03:05.278633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:42:57.611 [2024-11-20 14:03:05.278670] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:42:57.611 [2024-11-20 14:03:05.278697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:57.611 [2024-11-20 14:03:05.278807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:57.611 [2024-11-20 14:03:05.278847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:42:57.611 [2024-11-20 14:03:05.278881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:42:57.611 [2024-11-20 14:03:05.278907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:57.611 [2024-11-20 14:03:05.278949] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:42:57.611 [2024-11-20 14:03:05.284124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:57.611 [2024-11-20 14:03:05.284192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:42:57.611 [2024-11-20 14:03:05.284229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.193 ms 00:42:57.611 [2024-11-20 14:03:05.284250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:57.611 [2024-11-20 14:03:05.284304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:57.611 [2024-11-20 14:03:05.284335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:42:57.611 [2024-11-20 14:03:05.284386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:42:57.611 [2024-11-20 14:03:05.284420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:57.611 [2024-11-20 14:03:05.284480] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:42:57.611 [2024-11-20 14:03:05.284641] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:42:57.611 [2024-11-20 14:03:05.284692] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:42:57.611 [2024-11-20 14:03:05.284791] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:42:57.611 [2024-11-20 14:03:05.284844] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:42:57.611 [2024-11-20 14:03:05.284892] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:42:57.611 [2024-11-20 14:03:05.284942] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:42:57.611 [2024-11-20 14:03:05.284971] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:42:57.611 [2024-11-20 14:03:05.284998] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:42:57.611 [2024-11-20 14:03:05.285020] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:42:57.611 [2024-11-20 14:03:05.285065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:57.611 [2024-11-20 14:03:05.285097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:42:57.611 [2024-11-20 14:03:05.285144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.588 ms 00:42:57.611 [2024-11-20 14:03:05.285167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:57.611 [2024-11-20 14:03:05.285244] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:57.611 [2024-11-20 14:03:05.285254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:42:57.611 [2024-11-20 14:03:05.285265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:42:57.611 [2024-11-20 14:03:05.285272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:57.611 [2024-11-20 14:03:05.285364] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:42:57.611 [2024-11-20 14:03:05.285377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:42:57.611 [2024-11-20 14:03:05.285387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:57.611 [2024-11-20 14:03:05.285396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:57.611 [2024-11-20 14:03:05.285407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:42:57.611 [2024-11-20 14:03:05.285414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:42:57.611 [2024-11-20 14:03:05.285422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:42:57.611 [2024-11-20 14:03:05.285429] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:42:57.611 [2024-11-20 14:03:05.285438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:42:57.611 [2024-11-20 14:03:05.285444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:57.611 [2024-11-20 14:03:05.285452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:42:57.611 [2024-11-20 14:03:05.285460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:42:57.611 [2024-11-20 14:03:05.285468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:42:57.611 [2024-11-20 14:03:05.285474] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:42:57.611 [2024-11-20 14:03:05.285483] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:42:57.611 [2024-11-20 14:03:05.285490] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:57.611 [2024-11-20 14:03:05.285500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:42:57.611 [2024-11-20 14:03:05.285508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:42:57.611 [2024-11-20 14:03:05.285518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:57.611 [2024-11-20 14:03:05.285524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:42:57.611 [2024-11-20 14:03:05.285533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:42:57.611 [2024-11-20 14:03:05.285539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:57.612 [2024-11-20 14:03:05.285547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:42:57.612 [2024-11-20 14:03:05.285554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:42:57.612 [2024-11-20 14:03:05.285562] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:57.612 [2024-11-20 14:03:05.285568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:42:57.612 [2024-11-20 14:03:05.285577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:42:57.612 [2024-11-20 14:03:05.285583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:57.612 [2024-11-20 14:03:05.285592] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:42:57.612 [2024-11-20 14:03:05.285599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:42:57.612 [2024-11-20 14:03:05.285607] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:42:57.612 [2024-11-20 14:03:05.285614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:42:57.612 [2024-11-20 14:03:05.285624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:42:57.612 [2024-11-20 14:03:05.285630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:57.612 [2024-11-20 14:03:05.285637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:42:57.612 [2024-11-20 14:03:05.285644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:42:57.612 [2024-11-20 14:03:05.285651] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:42:57.612 [2024-11-20 14:03:05.285657] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:42:57.612 [2024-11-20 14:03:05.285666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:42:57.612 [2024-11-20 14:03:05.285672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:57.612 [2024-11-20 14:03:05.285680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:42:57.612 [2024-11-20 14:03:05.285686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:42:57.612 [2024-11-20 14:03:05.285694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:57.612 [2024-11-20 14:03:05.285702] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:42:57.612 [2024-11-20 14:03:05.285711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:42:57.612 [2024-11-20 14:03:05.285728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:42:57.612 [2024-11-20 14:03:05.285740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:42:57.612 [2024-11-20 14:03:05.285748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:42:57.612 [2024-11-20 14:03:05.285758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:42:57.612 [2024-11-20 14:03:05.285766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:42:57.612 [2024-11-20 14:03:05.285774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:42:57.612 [2024-11-20 14:03:05.285781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:42:57.612 [2024-11-20 14:03:05.285790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:42:57.612 [2024-11-20 14:03:05.285800] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:42:57.612 [2024-11-20 14:03:05.285811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:57.612 [2024-11-20 14:03:05.285822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:42:57.612 [2024-11-20 14:03:05.285831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:42:57.612 [2024-11-20 14:03:05.285839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:42:57.612 [2024-11-20 14:03:05.285848] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:42:57.612 [2024-11-20 14:03:05.285855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:42:57.612 [2024-11-20 14:03:05.285863] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:42:57.612 [2024-11-20 14:03:05.285871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:42:57.612 [2024-11-20 14:03:05.285880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:42:57.612 [2024-11-20 14:03:05.285887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:42:57.612 [2024-11-20 14:03:05.285897] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:42:57.612 [2024-11-20 14:03:05.285905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:42:57.612 [2024-11-20 14:03:05.285913] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:42:57.612 [2024-11-20 14:03:05.285920] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:42:57.612 [2024-11-20 14:03:05.285930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:42:57.612 [2024-11-20 14:03:05.285937] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:42:57.612 [2024-11-20 14:03:05.285947] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:42:57.612 [2024-11-20 14:03:05.285954] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:42:57.612 [2024-11-20 14:03:05.285963] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:42:57.612 [2024-11-20 14:03:05.285970] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:42:57.612 [2024-11-20 14:03:05.285978] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:42:57.612 [2024-11-20 14:03:05.285985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:42:57.612 [2024-11-20 14:03:05.285995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:42:57.612 [2024-11-20 14:03:05.286002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:42:57.612 [2024-11-20 14:03:05.286012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:42:57.612 [2024-11-20 14:03:05.286053] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:42:57.612 [2024-11-20 14:03:05.286067] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:43:00.911 [2024-11-20 14:03:08.512105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.911 [2024-11-20 14:03:08.512172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:43:00.911 [2024-11-20 14:03:08.512186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3232.272 ms 00:43:00.911 [2024-11-20 14:03:08.512198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.911 [2024-11-20 14:03:08.549885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.911 [2024-11-20 14:03:08.549940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:43:00.911 [2024-11-20 14:03:08.549954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.425 ms 00:43:00.911 [2024-11-20 14:03:08.549964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.911 [2024-11-20 14:03:08.550117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.911 [2024-11-20 14:03:08.550132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:43:00.911 [2024-11-20 14:03:08.550141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:43:00.911 [2024-11-20 14:03:08.550156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.911 [2024-11-20 14:03:08.595696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.912 [2024-11-20 14:03:08.595768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:43:00.912 [2024-11-20 14:03:08.595781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.591 ms 00:43:00.912 [2024-11-20 14:03:08.595790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.912 [2024-11-20 14:03:08.595836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.912 [2024-11-20 14:03:08.595852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:43:00.912 [2024-11-20 14:03:08.595860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:43:00.912 [2024-11-20 14:03:08.595870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.912 [2024-11-20 14:03:08.596368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.912 [2024-11-20 14:03:08.596386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:43:00.912 [2024-11-20 14:03:08.596395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:43:00.912 [2024-11-20 14:03:08.596405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.912 [2024-11-20 14:03:08.596501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.912 [2024-11-20 14:03:08.596512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:43:00.912 [2024-11-20 14:03:08.596522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:43:00.912 [2024-11-20 14:03:08.596533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:00.912 [2024-11-20 14:03:08.616098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:00.912 [2024-11-20 14:03:08.616153] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:43:00.912 [2024-11-20 14:03:08.616167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.583 ms 00:43:00.912 [2024-11-20 14:03:08.616177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.172 [2024-11-20 14:03:08.640158] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:43:01.172 [2024-11-20 14:03:08.643568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:01.172 [2024-11-20 14:03:08.643604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:43:01.172 [2024-11-20 14:03:08.643620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.325 ms 00:43:01.172 [2024-11-20 14:03:08.643628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.172 [2024-11-20 14:03:08.731139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:01.172 [2024-11-20 14:03:08.731201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:43:01.172 [2024-11-20 14:03:08.731218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.627 ms 00:43:01.172 [2024-11-20 14:03:08.731227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.172 [2024-11-20 14:03:08.731436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:01.172 [2024-11-20 14:03:08.731451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:43:01.172 [2024-11-20 14:03:08.731465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:43:01.172 [2024-11-20 14:03:08.731473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.172 [2024-11-20 14:03:08.768434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:01.172 [2024-11-20 14:03:08.768563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:43:01.172 [2024-11-20 14:03:08.768584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.981 ms 00:43:01.172 [2024-11-20 14:03:08.768593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.172 [2024-11-20 14:03:08.803323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:01.172 [2024-11-20 14:03:08.803365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:43:01.172 [2024-11-20 14:03:08.803380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.749 ms 00:43:01.172 [2024-11-20 14:03:08.803388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.172 [2024-11-20 14:03:08.804161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:01.172 [2024-11-20 14:03:08.804178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:43:01.172 [2024-11-20 14:03:08.804190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.733 ms 00:43:01.172 [2024-11-20 14:03:08.804202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.432 [2024-11-20 14:03:08.913071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:01.432 [2024-11-20 14:03:08.913155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:43:01.432 [2024-11-20 14:03:08.913189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 109.000 ms 00:43:01.432 [2024-11-20 14:03:08.913198] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.432 [2024-11-20 14:03:08.952298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:01.432 [2024-11-20 14:03:08.952363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:43:01.432 [2024-11-20 14:03:08.952381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.045 ms 00:43:01.432 [2024-11-20 14:03:08.952390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.432 [2024-11-20 14:03:08.993359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:01.432 [2024-11-20 14:03:08.993424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:43:01.432 [2024-11-20 14:03:08.993442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.983 ms 00:43:01.432 [2024-11-20 14:03:08.993450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.432 [2024-11-20 14:03:09.034474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:01.432 [2024-11-20 14:03:09.034532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:43:01.432 [2024-11-20 14:03:09.034548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.036 ms 00:43:01.432 [2024-11-20 14:03:09.034556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.432 [2024-11-20 14:03:09.034610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:01.432 [2024-11-20 14:03:09.034620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:43:01.432 [2024-11-20 14:03:09.034636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:43:01.432 [2024-11-20 14:03:09.034644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.432 [2024-11-20 14:03:09.034779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:43:01.432 [2024-11-20 14:03:09.034792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:43:01.432 [2024-11-20 14:03:09.034809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:43:01.432 [2024-11-20 14:03:09.034818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:43:01.432 [2024-11-20 14:03:09.036025] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3774.665 ms, result 0 00:43:01.432 { 00:43:01.432 "name": "ftl0", 00:43:01.432 "uuid": "a7a8e5bc-d38f-4e24-83a5-ef0fc97d3e10" 00:43:01.432 } 00:43:01.432 14:03:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:43:01.432 14:03:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:43:01.691 14:03:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:43:01.691 14:03:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:43:01.691 14:03:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:43:01.949 /dev/nbd0 00:43:01.949 14:03:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:43:01.949 14:03:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:43:01.949 14:03:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:43:01.949 14:03:09 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:43:01.949 14:03:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:43:01.949 14:03:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:43:01.949 14:03:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:43:01.949 14:03:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:43:01.950 14:03:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:43:01.950 14:03:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:43:01.950 1+0 records in 00:43:01.950 1+0 records out 00:43:01.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310354 s, 13.2 MB/s 00:43:01.950 14:03:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:43:01.950 14:03:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:43:01.950 14:03:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:43:01.950 14:03:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:43:01.950 14:03:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:43:01.950 14:03:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:43:02.209 [2024-11-20 14:03:09.675015] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:43:02.209 [2024-11-20 14:03:09.675221] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81957 ] 00:43:02.210 [2024-11-20 14:03:09.855801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:02.469 [2024-11-20 14:03:09.984626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:03.850  [2024-11-20T14:03:12.509Z] Copying: 217/1024 [MB] (217 MBps) [2024-11-20T14:03:13.448Z] Copying: 443/1024 [MB] (225 MBps) [2024-11-20T14:03:14.440Z] Copying: 656/1024 [MB] (213 MBps) [2024-11-20T14:03:15.013Z] Copying: 877/1024 [MB] (220 MBps) [2024-11-20T14:03:16.395Z] Copying: 1024/1024 [MB] (average 219 MBps) 00:43:08.676 00:43:08.676 14:03:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:43:10.586 14:03:17 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:43:10.586 [2024-11-20 14:03:18.082072] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
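The xtrace lines above come from the waitfornbd helper in common/autotest_common.sh: it polls /proc/partitions until the kernel exposes the new nbd device, then proves the device actually serves reads with a single 4 KiB direct-I/O block. A minimal sketch of that helper, reconstructed from the traced commands only (the sleep interval and the scratch-file path are assumptions — just the successful first iteration of each loop shows up in the trace):

    waitfornbd() {
        local nbd_name=$1 i size
        # Wait for the kernel to list the device as a partition entry
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed retry delay; untraced because the first probe hit
        done
        # Verify the device serves data: read one 4 KiB block with O_DIRECT
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && return 0
        done
        return 1
    }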
00:43:10.586 [2024-11-20 14:03:18.082182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82043 ] 00:43:10.586 [2024-11-20 14:03:18.257800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:10.846 [2024-11-20 14:03:18.374903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:12.224  [2024-11-20T14:03:20.881Z] Copying: 21/1024 [MB] (21 MBps) [2024-11-20T14:03:21.820Z] Copying: 44/1024 [MB] (22 MBps) [2024-11-20T14:03:22.766Z] Copying: 67/1024 [MB] (22 MBps) [2024-11-20T14:03:23.704Z] Copying: 88/1024 [MB] (21 MBps) [2024-11-20T14:03:25.084Z] Copying: 109/1024 [MB] (21 MBps) [2024-11-20T14:03:26.023Z] Copying: 129/1024 [MB] (19 MBps) [2024-11-20T14:03:26.961Z] Copying: 148/1024 [MB] (19 MBps) [2024-11-20T14:03:27.898Z] Copying: 168/1024 [MB] (19 MBps) [2024-11-20T14:03:28.836Z] Copying: 187/1024 [MB] (19 MBps) [2024-11-20T14:03:29.799Z] Copying: 209/1024 [MB] (21 MBps) [2024-11-20T14:03:30.737Z] Copying: 231/1024 [MB] (21 MBps) [2024-11-20T14:03:32.116Z] Copying: 252/1024 [MB] (20 MBps) [2024-11-20T14:03:32.684Z] Copying: 272/1024 [MB] (20 MBps) [2024-11-20T14:03:34.066Z] Copying: 294/1024 [MB] (21 MBps) [2024-11-20T14:03:35.005Z] Copying: 314/1024 [MB] (20 MBps) [2024-11-20T14:03:35.942Z] Copying: 335/1024 [MB] (20 MBps) [2024-11-20T14:03:36.886Z] Copying: 358/1024 [MB] (22 MBps) [2024-11-20T14:03:37.827Z] Copying: 380/1024 [MB] (22 MBps) [2024-11-20T14:03:38.767Z] Copying: 400/1024 [MB] (20 MBps) [2024-11-20T14:03:39.757Z] Copying: 420/1024 [MB] (20 MBps) [2024-11-20T14:03:40.696Z] Copying: 440/1024 [MB] (20 MBps) [2024-11-20T14:03:42.072Z] Copying: 461/1024 [MB] (20 MBps) [2024-11-20T14:03:43.009Z] Copying: 482/1024 [MB] (20 MBps) [2024-11-20T14:03:43.946Z] Copying: 502/1024 [MB] (20 MBps) [2024-11-20T14:03:44.929Z] Copying: 522/1024 [MB] (19 MBps) [2024-11-20T14:03:45.865Z] Copying: 542/1024 [MB] (20 MBps) [2024-11-20T14:03:46.802Z] Copying: 563/1024 [MB] (20 MBps) [2024-11-20T14:03:47.740Z] Copying: 583/1024 [MB] (20 MBps) [2024-11-20T14:03:48.679Z] Copying: 603/1024 [MB] (19 MBps) [2024-11-20T14:03:50.060Z] Copying: 623/1024 [MB] (19 MBps) [2024-11-20T14:03:50.999Z] Copying: 642/1024 [MB] (19 MBps) [2024-11-20T14:03:51.939Z] Copying: 663/1024 [MB] (20 MBps) [2024-11-20T14:03:52.880Z] Copying: 683/1024 [MB] (19 MBps) [2024-11-20T14:03:53.817Z] Copying: 702/1024 [MB] (19 MBps) [2024-11-20T14:03:54.755Z] Copying: 722/1024 [MB] (19 MBps) [2024-11-20T14:03:55.693Z] Copying: 741/1024 [MB] (19 MBps) [2024-11-20T14:03:56.644Z] Copying: 762/1024 [MB] (20 MBps) [2024-11-20T14:03:58.023Z] Copying: 784/1024 [MB] (22 MBps) [2024-11-20T14:03:58.963Z] Copying: 806/1024 [MB] (21 MBps) [2024-11-20T14:03:59.901Z] Copying: 829/1024 [MB] (23 MBps) [2024-11-20T14:04:00.839Z] Copying: 853/1024 [MB] (23 MBps) [2024-11-20T14:04:01.778Z] Copying: 876/1024 [MB] (23 MBps) [2024-11-20T14:04:02.721Z] Copying: 899/1024 [MB] (23 MBps) [2024-11-20T14:04:03.658Z] Copying: 923/1024 [MB] (23 MBps) [2024-11-20T14:04:05.038Z] Copying: 947/1024 [MB] (24 MBps) [2024-11-20T14:04:05.977Z] Copying: 971/1024 [MB] (23 MBps) [2024-11-20T14:04:06.916Z] Copying: 994/1024 [MB] (22 MBps) [2024-11-20T14:04:06.916Z] Copying: 1017/1024 [MB] (22 MBps) [2024-11-20T14:04:08.297Z] Copying: 1024/1024 [MB] (average 21 MBps) 00:44:00.578 00:44:00.578 14:04:08 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:44:00.578 14:04:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:44:00.839 14:04:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:44:01.099 [2024-11-20 14:04:08.660160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:01.100 [2024-11-20 14:04:08.660357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:01.100 [2024-11-20 14:04:08.660404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:44:01.100 [2024-11-20 14:04:08.660433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.100 [2024-11-20 14:04:08.660482] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:01.100 [2024-11-20 14:04:08.666106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:01.100 [2024-11-20 14:04:08.666217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:01.100 [2024-11-20 14:04:08.666258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.538 ms 00:44:01.100 [2024-11-20 14:04:08.666283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.100 [2024-11-20 14:04:08.668749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:01.100 [2024-11-20 14:04:08.668848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:01.100 [2024-11-20 14:04:08.668870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.377 ms 00:44:01.100 [2024-11-20 14:04:08.668880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.100 [2024-11-20 14:04:08.686663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:01.100 [2024-11-20 14:04:08.686764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:01.100 [2024-11-20 14:04:08.686782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.777 ms 00:44:01.100 [2024-11-20 14:04:08.686791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.100 [2024-11-20 14:04:08.691941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:01.100 [2024-11-20 14:04:08.691990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:01.100 [2024-11-20 14:04:08.692006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.106 ms 00:44:01.100 [2024-11-20 14:04:08.692015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.100 [2024-11-20 14:04:08.736714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:01.100 [2024-11-20 14:04:08.736813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:01.100 [2024-11-20 14:04:08.736849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.657 ms 00:44:01.100 [2024-11-20 14:04:08.736858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.100 [2024-11-20 14:04:08.764309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:01.100 [2024-11-20 14:04:08.764409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:01.100 [2024-11-20 14:04:08.764447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.378 ms 00:44:01.100 
[2024-11-20 14:04:08.764460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.100 [2024-11-20 14:04:08.764750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:01.100 [2024-11-20 14:04:08.764765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:01.100 [2024-11-20 14:04:08.764778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:44:01.100 [2024-11-20 14:04:08.764786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.100 [2024-11-20 14:04:08.808376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:01.100 [2024-11-20 14:04:08.808468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:01.100 [2024-11-20 14:04:08.808488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.637 ms 00:44:01.100 [2024-11-20 14:04:08.808512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.361 [2024-11-20 14:04:08.850933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:01.361 [2024-11-20 14:04:08.851145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:01.361 [2024-11-20 14:04:08.851187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.391 ms 00:44:01.361 [2024-11-20 14:04:08.851209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.361 [2024-11-20 14:04:08.893134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:01.361 [2024-11-20 14:04:08.893333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:01.361 [2024-11-20 14:04:08.893354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.868 ms 00:44:01.361 [2024-11-20 14:04:08.893363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.361 [2024-11-20 14:04:08.934300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:01.361 [2024-11-20 14:04:08.934383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:01.361 [2024-11-20 14:04:08.934418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.802 ms 00:44:01.361 [2024-11-20 14:04:08.934427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.361 [2024-11-20 14:04:08.934515] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:01.361 [2024-11-20 14:04:08.934533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934610] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:01.361 [2024-11-20 14:04:08.934826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934885] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.934993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 
14:04:08.935156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 
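The bands-validity dump running through here (it continues up to Band 100 just below) lists one record per band, and every band reports 0 of 261120 blocks valid, wr_cnt 0, state free — matching the statistics dumped right after it (total valid LBAs: 0, user writes: 0). When scanning a dump like this by hand, a condensed tally is easier to read; a quick sketch, assuming the console output was saved to build.log (the file name is illustrative):

    # Tally band states instead of reading 100 near-identical records
    grep -o 'state: [a-z]*' build.log | sort | uniq -c
    # Sanity-check the per-band capacity: 261120 blocks at the 4 KiB
    # FTL block size is exactly 1020 MiB of user data per band
    echo $((261120 * 4096 / 1024 / 1024))   # -> 1020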
00:44:01.362 [2024-11-20 14:04:08.935395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:01.362 [2024-11-20 14:04:08.935625] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:01.363 [2024-11-20 14:04:08.935637] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7a8e5bc-d38f-4e24-83a5-ef0fc97d3e10 00:44:01.363 [2024-11-20 14:04:08.935647] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:44:01.363 [2024-11-20 14:04:08.935661] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:44:01.363 [2024-11-20 14:04:08.935670] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:44:01.363 [2024-11-20 14:04:08.935687] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:44:01.363 [2024-11-20 14:04:08.935696] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:01.363 [2024-11-20 14:04:08.935708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:01.363 [2024-11-20 14:04:08.935717] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:01.363 [2024-11-20 14:04:08.935728] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:01.363 [2024-11-20 14:04:08.935744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:01.363 [2024-11-20 14:04:08.935767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:01.363 [2024-11-20 14:04:08.935777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:01.363 [2024-11-20 14:04:08.935789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.256 ms 00:44:01.363 [2024-11-20 14:04:08.935798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.363 [2024-11-20 14:04:08.958708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:01.363 [2024-11-20 14:04:08.958801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:01.363 [2024-11-20 14:04:08.958819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.836 ms 00:44:01.363 [2024-11-20 14:04:08.958828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.363 [2024-11-20 14:04:08.959550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:01.363 [2024-11-20 14:04:08.959568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:01.363 [2024-11-20 14:04:08.959580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:44:01.363 [2024-11-20 14:04:08.959588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.363 [2024-11-20 14:04:09.030877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:01.363 [2024-11-20 14:04:09.030963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:01.363 [2024-11-20 14:04:09.030980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:01.363 [2024-11-20 14:04:09.030988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.363 [2024-11-20 14:04:09.031087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:01.363 [2024-11-20 14:04:09.031097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:01.363 [2024-11-20 14:04:09.031107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:01.363 [2024-11-20 14:04:09.031116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.363 [2024-11-20 14:04:09.031258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:01.363 [2024-11-20 14:04:09.031275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:01.363 [2024-11-20 14:04:09.031286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:01.363 [2024-11-20 14:04:09.031294] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.363 [2024-11-20 14:04:09.031321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:01.363 [2024-11-20 14:04:09.031330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:01.363 [2024-11-20 14:04:09.031340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:01.363 [2024-11-20 14:04:09.031348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.624 [2024-11-20 14:04:09.173666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:44:01.624 [2024-11-20 14:04:09.173744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:01.624 [2024-11-20 14:04:09.173764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:01.624 [2024-11-20 14:04:09.173774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.624 [2024-11-20 14:04:09.290042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:01.624 [2024-11-20 14:04:09.290138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:01.624 [2024-11-20 14:04:09.290156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:01.624 [2024-11-20 14:04:09.290166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.624 [2024-11-20 14:04:09.290318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:01.624 [2024-11-20 14:04:09.290329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:01.624 [2024-11-20 14:04:09.290351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:01.624 [2024-11-20 14:04:09.290363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.624 [2024-11-20 14:04:09.290434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:01.624 [2024-11-20 14:04:09.290447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:01.624 [2024-11-20 14:04:09.290457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:01.624 [2024-11-20 14:04:09.290465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.624 [2024-11-20 14:04:09.290587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:01.624 [2024-11-20 14:04:09.290598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:01.624 [2024-11-20 14:04:09.290609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:01.624 [2024-11-20 14:04:09.290621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.624 [2024-11-20 14:04:09.290667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:01.624 [2024-11-20 14:04:09.290678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:01.624 [2024-11-20 14:04:09.290689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:01.624 [2024-11-20 14:04:09.290697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.624 [2024-11-20 14:04:09.290772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:01.624 [2024-11-20 14:04:09.290783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:01.624 [2024-11-20 14:04:09.290793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:01.624 [2024-11-20 14:04:09.290801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.624 [2024-11-20 14:04:09.290870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:01.624 [2024-11-20 14:04:09.290879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:01.624 [2024-11-20 14:04:09.290889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:01.624 [2024-11-20 14:04:09.290896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:01.624 [2024-11-20 
14:04:09.291077] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 632.087 ms, result 0 00:44:01.624 true 00:44:01.624 14:04:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81808 00:44:01.624 14:04:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81808 00:44:01.624 14:04:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:44:01.884 [2024-11-20 14:04:09.406517] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:44:01.884 [2024-11-20 14:04:09.406662] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82558 ] 00:44:01.884 [2024-11-20 14:04:09.577454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:02.144 [2024-11-20 14:04:09.723902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:03.524  [2024-11-20T14:04:12.181Z] Copying: 225/1024 [MB] (225 MBps) [2024-11-20T14:04:13.119Z] Copying: 448/1024 [MB] (223 MBps) [2024-11-20T14:04:14.514Z] Copying: 655/1024 [MB] (207 MBps) [2024-11-20T14:04:15.082Z] Copying: 861/1024 [MB] (206 MBps) [2024-11-20T14:04:16.463Z] Copying: 1024/1024 [MB] (average 214 MBps) 00:44:08.744 00:44:08.744 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81808 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:44:08.744 14:04:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:08.744 [2024-11-20 14:04:16.214631] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
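This is the test's namesake step: the first target (pid 81808) is force-killed right after the clean unload above, and a second spdk_dd run then drives writes directly into ftl0 from the JSON config captured earlier — the killed target never cleanly released the cache device, which is why the reload below reports blobstore recovery. Condensed from the dirty_shutdown.sh commands traced around this point ($svcpid stands in for the literal pid, and the long repository paths are shortened, both purely for readability):

    # Force-kill the SPDK target; nothing further gets persisted by it
    kill -9 "$svcpid"
    rm -f "/dev/shm/spdk_tgt_trace.pid$svcpid"

    # Produce a second 1 GiB random pattern, then write it to ftl0 one
    # 1 GiB region further in (--seek=262144 blocks); spdk_dd brings up
    # the bdev stack itself from the saved JSON config
    spdk_dd --if=/dev/urandom --of=testfile2 --bs=4096 --count=262144
    spdk_dd --if=testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=ftl.json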
00:44:08.744 [2024-11-20 14:04:16.215377] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82626 ] 00:44:08.744 [2024-11-20 14:04:16.395736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:09.004 [2024-11-20 14:04:16.530783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:09.268 [2024-11-20 14:04:16.946559] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:09.268 [2024-11-20 14:04:16.946638] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:09.535 [2024-11-20 14:04:17.013486] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:44:09.535 [2024-11-20 14:04:17.013812] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:44:09.535 [2024-11-20 14:04:17.014041] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:44:09.795 [2024-11-20 14:04:17.282910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.795 [2024-11-20 14:04:17.282964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:09.795 [2024-11-20 14:04:17.282978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:09.795 [2024-11-20 14:04:17.282986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.795 [2024-11-20 14:04:17.283042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.795 [2024-11-20 14:04:17.283052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:09.795 [2024-11-20 14:04:17.283061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:44:09.795 [2024-11-20 14:04:17.283068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.795 [2024-11-20 14:04:17.283088] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:09.796 [2024-11-20 14:04:17.284060] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:09.796 [2024-11-20 14:04:17.284081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.796 [2024-11-20 14:04:17.284090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:09.796 [2024-11-20 14:04:17.284098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.000 ms 00:44:09.796 [2024-11-20 14:04:17.284106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.796 [2024-11-20 14:04:17.286524] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:09.796 [2024-11-20 14:04:17.307382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.796 [2024-11-20 14:04:17.307428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:09.796 [2024-11-20 14:04:17.307444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.899 ms 00:44:09.796 [2024-11-20 14:04:17.307452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.796 [2024-11-20 14:04:17.307528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.796 [2024-11-20 14:04:17.307539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:44:09.796 [2024-11-20 14:04:17.307549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:44:09.796 [2024-11-20 14:04:17.307556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.796 [2024-11-20 14:04:17.320559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.796 [2024-11-20 14:04:17.320597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:09.796 [2024-11-20 14:04:17.320609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.953 ms 00:44:09.796 [2024-11-20 14:04:17.320618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.796 [2024-11-20 14:04:17.320712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.796 [2024-11-20 14:04:17.320746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:09.796 [2024-11-20 14:04:17.320772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:44:09.796 [2024-11-20 14:04:17.320780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.796 [2024-11-20 14:04:17.320861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.796 [2024-11-20 14:04:17.320873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:09.796 [2024-11-20 14:04:17.320882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:44:09.796 [2024-11-20 14:04:17.320891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.796 [2024-11-20 14:04:17.320919] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:09.796 [2024-11-20 14:04:17.326964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.796 [2024-11-20 14:04:17.326991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:09.796 [2024-11-20 14:04:17.327001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.065 ms 00:44:09.796 [2024-11-20 14:04:17.327009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.796 [2024-11-20 14:04:17.327041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.796 [2024-11-20 14:04:17.327050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:09.796 [2024-11-20 14:04:17.327058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:44:09.796 [2024-11-20 14:04:17.327066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.796 [2024-11-20 14:04:17.327108] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:09.796 [2024-11-20 14:04:17.327131] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:44:09.796 [2024-11-20 14:04:17.327175] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:09.796 [2024-11-20 14:04:17.327191] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:44:09.796 [2024-11-20 14:04:17.327284] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:09.796 [2024-11-20 14:04:17.327295] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:09.796 
[2024-11-20 14:04:17.327305] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:09.796 [2024-11-20 14:04:17.327317] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:09.796 [2024-11-20 14:04:17.327330] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:09.796 [2024-11-20 14:04:17.327339] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:44:09.796 [2024-11-20 14:04:17.327347] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:09.796 [2024-11-20 14:04:17.327355] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:09.796 [2024-11-20 14:04:17.327364] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:09.796 [2024-11-20 14:04:17.327372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.796 [2024-11-20 14:04:17.327379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:09.796 [2024-11-20 14:04:17.327388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:44:09.796 [2024-11-20 14:04:17.327396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.796 [2024-11-20 14:04:17.327467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.796 [2024-11-20 14:04:17.327480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:09.796 [2024-11-20 14:04:17.327487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:44:09.796 [2024-11-20 14:04:17.327495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.796 [2024-11-20 14:04:17.327599] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:09.796 [2024-11-20 14:04:17.327613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:09.796 [2024-11-20 14:04:17.327622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:09.796 [2024-11-20 14:04:17.327631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:09.796 [2024-11-20 14:04:17.327640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:09.796 [2024-11-20 14:04:17.327647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:09.796 [2024-11-20 14:04:17.327655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:44:09.796 [2024-11-20 14:04:17.327663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:09.796 [2024-11-20 14:04:17.327671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:44:09.796 [2024-11-20 14:04:17.327679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:09.796 [2024-11-20 14:04:17.327686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:09.796 [2024-11-20 14:04:17.327704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:44:09.796 [2024-11-20 14:04:17.327712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:09.796 [2024-11-20 14:04:17.327730] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:09.796 [2024-11-20 14:04:17.327738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:44:09.796 [2024-11-20 14:04:17.327746] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:09.796 [2024-11-20 14:04:17.327753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:09.796 [2024-11-20 14:04:17.327769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:44:09.796 [2024-11-20 14:04:17.327777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:09.796 [2024-11-20 14:04:17.327785] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:09.796 [2024-11-20 14:04:17.327792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:44:09.796 [2024-11-20 14:04:17.327799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:09.796 [2024-11-20 14:04:17.327823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:09.796 [2024-11-20 14:04:17.327831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:44:09.796 [2024-11-20 14:04:17.327838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:09.796 [2024-11-20 14:04:17.327846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:09.796 [2024-11-20 14:04:17.327854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:44:09.796 [2024-11-20 14:04:17.327862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:09.796 [2024-11-20 14:04:17.327870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:09.796 [2024-11-20 14:04:17.327878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:44:09.796 [2024-11-20 14:04:17.327885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:09.796 [2024-11-20 14:04:17.327893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:09.796 [2024-11-20 14:04:17.327902] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:44:09.796 [2024-11-20 14:04:17.327909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:09.796 [2024-11-20 14:04:17.327916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:09.796 [2024-11-20 14:04:17.327925] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:44:09.796 [2024-11-20 14:04:17.327932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:09.796 [2024-11-20 14:04:17.327940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:09.796 [2024-11-20 14:04:17.327947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:44:09.796 [2024-11-20 14:04:17.327954] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:09.796 [2024-11-20 14:04:17.327962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:09.796 [2024-11-20 14:04:17.327970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:44:09.796 [2024-11-20 14:04:17.327979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:09.796 [2024-11-20 14:04:17.327987] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:09.796 [2024-11-20 14:04:17.327996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:09.796 [2024-11-20 14:04:17.328004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:09.797 [2024-11-20 14:04:17.328016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:09.797 [2024-11-20 
14:04:17.328026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:09.797 [2024-11-20 14:04:17.328035] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:09.797 [2024-11-20 14:04:17.328043] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:09.797 [2024-11-20 14:04:17.328051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:09.797 [2024-11-20 14:04:17.328059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:09.797 [2024-11-20 14:04:17.328067] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:09.797 [2024-11-20 14:04:17.328077] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:09.797 [2024-11-20 14:04:17.328087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:09.797 [2024-11-20 14:04:17.328097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:44:09.797 [2024-11-20 14:04:17.328105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:44:09.797 [2024-11-20 14:04:17.328114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:44:09.797 [2024-11-20 14:04:17.328123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:44:09.797 [2024-11-20 14:04:17.328132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:44:09.797 [2024-11-20 14:04:17.328140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:44:09.797 [2024-11-20 14:04:17.328149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:44:09.797 [2024-11-20 14:04:17.328156] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:44:09.797 [2024-11-20 14:04:17.328165] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:44:09.797 [2024-11-20 14:04:17.328172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:44:09.797 [2024-11-20 14:04:17.328180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:44:09.797 [2024-11-20 14:04:17.328188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:44:09.797 [2024-11-20 14:04:17.328196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:44:09.797 [2024-11-20 14:04:17.328205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:44:09.797 [2024-11-20 14:04:17.328212] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:44:09.797 [2024-11-20 14:04:17.328221] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:09.797 [2024-11-20 14:04:17.328232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:44:09.797 [2024-11-20 14:04:17.328242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:09.797 [2024-11-20 14:04:17.328251] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:09.797 [2024-11-20 14:04:17.328260] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:09.797 [2024-11-20 14:04:17.328269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.797 [2024-11-20 14:04:17.328277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:09.797 [2024-11-20 14:04:17.328286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.728 ms 00:44:09.797 [2024-11-20 14:04:17.328295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.797 [2024-11-20 14:04:17.378746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.797 [2024-11-20 14:04:17.378801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:09.797 [2024-11-20 14:04:17.378817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.488 ms 00:44:09.797 [2024-11-20 14:04:17.378825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.797 [2024-11-20 14:04:17.378959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.797 [2024-11-20 14:04:17.378976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:09.797 [2024-11-20 14:04:17.378985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:44:09.797 [2024-11-20 14:04:17.378994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.797 [2024-11-20 14:04:17.443910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.797 [2024-11-20 14:04:17.443956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:09.797 [2024-11-20 14:04:17.443977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.928 ms 00:44:09.797 [2024-11-20 14:04:17.443986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.797 [2024-11-20 14:04:17.444061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.797 [2024-11-20 14:04:17.444073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:09.797 [2024-11-20 14:04:17.444083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:09.797 [2024-11-20 14:04:17.444092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.797 [2024-11-20 14:04:17.444925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.797 [2024-11-20 14:04:17.444940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:09.797 [2024-11-20 14:04:17.444951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.757 ms 00:44:09.797 [2024-11-20 14:04:17.444959] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.797 [2024-11-20 14:04:17.445104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.797 [2024-11-20 14:04:17.445119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:09.797 [2024-11-20 14:04:17.445129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.115 ms 00:44:09.797 [2024-11-20 14:04:17.445137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.797 [2024-11-20 14:04:17.466773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.797 [2024-11-20 14:04:17.466812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:09.797 [2024-11-20 14:04:17.466825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.651 ms 00:44:09.797 [2024-11-20 14:04:17.466834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:09.797 [2024-11-20 14:04:17.487273] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:44:09.797 [2024-11-20 14:04:17.487320] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:09.797 [2024-11-20 14:04:17.487352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:09.797 [2024-11-20 14:04:17.487362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:09.797 [2024-11-20 14:04:17.487375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.408 ms 00:44:09.797 [2024-11-20 14:04:17.487382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.057 [2024-11-20 14:04:17.517502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.057 [2024-11-20 14:04:17.517549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:10.057 [2024-11-20 14:04:17.517577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.125 ms 00:44:10.057 [2024-11-20 14:04:17.517586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.057 [2024-11-20 14:04:17.536186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.057 [2024-11-20 14:04:17.536230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:10.057 [2024-11-20 14:04:17.536243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.582 ms 00:44:10.057 [2024-11-20 14:04:17.536251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.057 [2024-11-20 14:04:17.553296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.057 [2024-11-20 14:04:17.553355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:10.057 [2024-11-20 14:04:17.553367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.033 ms 00:44:10.057 [2024-11-20 14:04:17.553376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.057 [2024-11-20 14:04:17.554194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.057 [2024-11-20 14:04:17.554222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:10.057 [2024-11-20 14:04:17.554232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:44:10.057 [2024-11-20 14:04:17.554239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
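
Two details in the startup trace above are easy to verify by hand. The layout dump's hex block counts agree with its MiB figures once you apply FTL's 4 KiB block size: the l2p region's blk_sz:0x5000 is 20480 blocks × 4 KiB = 80.00 MiB, exactly the "blocks: 80.00 MiB" reported for Region l2p. And because trace_step logs every management step as an Action/name/duration/status quadruple, the console log alone shows where startup time goes; so far "Initialize NV cache" (64.928 ms) and "Initialize metadata" (50.488 ms) dominate. A minimal sketch of that extraction, assuming a console.log capture where each *NOTICE* sits on its own line (as in the live Jenkins view, not this flattened transcript):

  # rank FTL management steps by duration, slowest first
  grep 'trace_step' console.log \
    | awk -F': ' '/name:/ {name=$NF} /duration:/ {print $NF "\t" name}' \
    | sort -rn | head
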
00:44:10.057 [2024-11-20 14:04:17.648684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.057 [2024-11-20 14:04:17.648810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:10.057 [2024-11-20 14:04:17.648827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.604 ms 00:44:10.057 [2024-11-20 14:04:17.648837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.057 [2024-11-20 14:04:17.660091] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:44:10.057 [2024-11-20 14:04:17.665119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.057 [2024-11-20 14:04:17.665154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:10.057 [2024-11-20 14:04:17.665183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.244 ms 00:44:10.057 [2024-11-20 14:04:17.665192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.057 [2024-11-20 14:04:17.665322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.057 [2024-11-20 14:04:17.665334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:10.057 [2024-11-20 14:04:17.665344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:44:10.057 [2024-11-20 14:04:17.665351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.057 [2024-11-20 14:04:17.665432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.057 [2024-11-20 14:04:17.665443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:10.057 [2024-11-20 14:04:17.665451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:44:10.057 [2024-11-20 14:04:17.665459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.057 [2024-11-20 14:04:17.665478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.057 [2024-11-20 14:04:17.665492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:10.057 [2024-11-20 14:04:17.665501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:44:10.057 [2024-11-20 14:04:17.665509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.057 [2024-11-20 14:04:17.665546] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:10.057 [2024-11-20 14:04:17.665572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.057 [2024-11-20 14:04:17.665580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:10.057 [2024-11-20 14:04:17.665589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:44:10.057 [2024-11-20 14:04:17.665596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.057 [2024-11-20 14:04:17.703593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.057 [2024-11-20 14:04:17.703644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:10.057 [2024-11-20 14:04:17.703660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.044 ms 00:44:10.057 [2024-11-20 14:04:17.703668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.057 [2024-11-20 14:04:17.703775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:10.057 [2024-11-20 
14:04:17.703804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:10.057 [2024-11-20 14:04:17.703814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:44:10.057 [2024-11-20 14:04:17.703822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:10.057 [2024-11-20 14:04:17.705377] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 422.761 ms, result 0 00:44:11.436 [2024-11-20T14:04:19.721Z] Copying: 32/1024 [MB] (32 MBps) [2024-11-20T14:04:21.098Z] Copying: 64/1024 [MB] (31 MBps) [2024-11-20T14:04:22.034Z] Copying: 97/1024 [MB] (32 MBps) [2024-11-20T14:04:22.972Z] Copying: 129/1024 [MB] (31 MBps) [2024-11-20T14:04:23.916Z] Copying: 160/1024 [MB] (31 MBps) [2024-11-20T14:04:24.866Z] Copying: 191/1024 [MB] (30 MBps) [2024-11-20T14:04:25.805Z] Copying: 223/1024 [MB] (31 MBps) [2024-11-20T14:04:26.742Z] Copying: 254/1024 [MB] (31 MBps) [2024-11-20T14:04:28.121Z] Copying: 285/1024 [MB] (30 MBps) [2024-11-20T14:04:29.060Z] Copying: 315/1024 [MB] (30 MBps) [2024-11-20T14:04:30.007Z] Copying: 346/1024 [MB] (30 MBps) [2024-11-20T14:04:30.946Z] Copying: 376/1024 [MB] (30 MBps) [2024-11-20T14:04:31.886Z] Copying: 407/1024 [MB] (30 MBps) [2024-11-20T14:04:32.825Z] Copying: 438/1024 [MB] (31 MBps) [2024-11-20T14:04:33.763Z] Copying: 469/1024 [MB] (30 MBps) [2024-11-20T14:04:34.700Z] Copying: 500/1024 [MB] (30 MBps) [2024-11-20T14:04:36.079Z] Copying: 530/1024 [MB] (30 MBps) [2024-11-20T14:04:37.016Z] Copying: 561/1024 [MB] (30 MBps) [2024-11-20T14:04:37.954Z] Copying: 592/1024 [MB] (30 MBps) [2024-11-20T14:04:38.892Z] Copying: 623/1024 [MB] (30 MBps) [2024-11-20T14:04:39.827Z] Copying: 654/1024 [MB] (30 MBps) [2024-11-20T14:04:40.766Z] Copying: 685/1024 [MB] (31 MBps) [2024-11-20T14:04:41.705Z] Copying: 716/1024 [MB] (30 MBps) [2024-11-20T14:04:43.083Z] Copying: 747/1024 [MB] (31 MBps) [2024-11-20T14:04:44.022Z] Copying: 781/1024 [MB] (34 MBps) [2024-11-20T14:04:44.977Z] Copying: 816/1024 [MB] (35 MBps) [2024-11-20T14:04:45.911Z] Copying: 851/1024 [MB] (34 MBps) [2024-11-20T14:04:46.845Z] Copying: 885/1024 [MB] (33 MBps) [2024-11-20T14:04:47.778Z] Copying: 918/1024 [MB] (32 MBps) [2024-11-20T14:04:48.712Z] Copying: 950/1024 [MB] (32 MBps) [2024-11-20T14:04:50.089Z] Copying: 982/1024 [MB] (32 MBps) [2024-11-20T14:04:50.657Z] Copying: 1015/1024 [MB] (32 MBps) [2024-11-20T14:04:50.917Z] Copying: 1048424/1048576 [kB] (8500 kBps) [2024-11-20T14:04:50.917Z] Copying: 1024/1024 [MB] (average 30 MBps) [2024-11-20 14:04:50.796004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:43.198 [2024-11-20 14:04:50.796111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:44:43.198 [2024-11-20 14:04:50.796130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:44:43.198 [2024-11-20 14:04:50.796158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.198 [2024-11-20 14:04:50.800554] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:44:43.198 [2024-11-20 14:04:50.806750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:43.198 [2024-11-20 14:04:50.806834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:44:43.198 [2024-11-20 14:04:50.806851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.126 ms 00:44:43.198 [2024-11-20 14:04:50.806876]
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.198 [2024-11-20 14:04:50.816999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:43.198 [2024-11-20 14:04:50.817097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:44:43.198 [2024-11-20 14:04:50.817117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.046 ms 00:44:43.198 [2024-11-20 14:04:50.817126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.198 [2024-11-20 14:04:50.841416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:43.198 [2024-11-20 14:04:50.841576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:44:43.198 [2024-11-20 14:04:50.841599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.309 ms 00:44:43.198 [2024-11-20 14:04:50.841610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.198 [2024-11-20 14:04:50.847092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:43.198 [2024-11-20 14:04:50.847164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:44:43.198 [2024-11-20 14:04:50.847194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.440 ms 00:44:43.198 [2024-11-20 14:04:50.847203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.198 [2024-11-20 14:04:50.895925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:43.198 [2024-11-20 14:04:50.896048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:44:43.198 [2024-11-20 14:04:50.896067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.693 ms 00:44:43.198 [2024-11-20 14:04:50.896076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.458 [2024-11-20 14:04:50.924086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:43.458 [2024-11-20 14:04:50.924189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:44:43.458 [2024-11-20 14:04:50.924208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.961 ms 00:44:43.458 [2024-11-20 14:04:50.924218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.458 [2024-11-20 14:04:50.999028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:43.458 [2024-11-20 14:04:50.999176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:44:43.458 [2024-11-20 14:04:50.999234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.833 ms 00:44:43.458 [2024-11-20 14:04:50.999245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.458 [2024-11-20 14:04:51.049749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:43.458 [2024-11-20 14:04:51.049894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:44:43.458 [2024-11-20 14:04:51.049914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.565 ms 00:44:43.458 [2024-11-20 14:04:51.049924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.458 [2024-11-20 14:04:51.098794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:43.458 [2024-11-20 14:04:51.098923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:44:43.458 [2024-11-20 14:04:51.098941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.852 ms 00:44:43.458 [2024-11-20 14:04:51.098950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.458 [2024-11-20 14:04:51.147114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:43.458 [2024-11-20 14:04:51.147245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:44:43.458 [2024-11-20 14:04:51.147262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.152 ms 00:44:43.458 [2024-11-20 14:04:51.147288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.718 [2024-11-20 14:04:51.196168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:43.718 [2024-11-20 14:04:51.196281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:44:43.718 [2024-11-20 14:04:51.196300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.769 ms 00:44:43.718 [2024-11-20 14:04:51.196309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.718 [2024-11-20 14:04:51.196411] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:44:43.718 [2024-11-20 14:04:51.196432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118272 / 261120 wr_cnt: 1 state: open 00:44:43.718 [2024-11-20 14:04:51.196444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 
wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:44:43.718 [2024-11-20 14:04:51.196682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 42: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.196995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197053] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197275] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:44:43.719 [2024-11-20 14:04:51.197374] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:44:43.719 [2024-11-20 14:04:51.197383] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7a8e5bc-d38f-4e24-83a5-ef0fc97d3e10 00:44:43.719 [2024-11-20 14:04:51.197393] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118272 00:44:43.719 [2024-11-20 14:04:51.197414] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119232 00:44:43.719 [2024-11-20 14:04:51.197442] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118272 00:44:43.719 [2024-11-20 14:04:51.197453] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:44:43.719 [2024-11-20 14:04:51.197462] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:44:43.719 [2024-11-20 14:04:51.197472] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:44:43.719 [2024-11-20 14:04:51.197480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:44:43.719 [2024-11-20 14:04:51.197488] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:44:43.719 [2024-11-20 14:04:51.197495] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:44:43.719 [2024-11-20 14:04:51.197505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:43.719 [2024-11-20 14:04:51.197514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:44:43.719 [2024-11-20 14:04:51.197525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.099 ms 00:44:43.719 [2024-11-20 14:04:51.197534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.719 [2024-11-20 14:04:51.222313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:43.719 [2024-11-20 14:04:51.222402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:44:43.719 [2024-11-20 14:04:51.222418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.762 ms 00:44:43.719 [2024-11-20 14:04:51.222427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.719 [2024-11-20 14:04:51.223213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:43.719 [2024-11-20 14:04:51.223232] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:44:43.720 [2024-11-20 14:04:51.223242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.716 ms 00:44:43.720 [2024-11-20 14:04:51.223260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.720 [2024-11-20 14:04:51.285371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:43.720 [2024-11-20 14:04:51.285461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:43.720 [2024-11-20 14:04:51.285477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:43.720 [2024-11-20 14:04:51.285487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.720 [2024-11-20 14:04:51.285609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:43.720 [2024-11-20 14:04:51.285621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:43.720 [2024-11-20 14:04:51.285631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:43.720 [2024-11-20 14:04:51.285645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.720 [2024-11-20 14:04:51.285776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:43.720 [2024-11-20 14:04:51.285793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:43.720 [2024-11-20 14:04:51.285803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:43.720 [2024-11-20 14:04:51.285811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.720 [2024-11-20 14:04:51.285833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:43.720 [2024-11-20 14:04:51.285842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:43.720 [2024-11-20 14:04:51.285852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:43.720 [2024-11-20 14:04:51.285860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.978 [2024-11-20 14:04:51.435664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:43.978 [2024-11-20 14:04:51.435769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:43.978 [2024-11-20 14:04:51.435808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:43.978 [2024-11-20 14:04:51.435819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.978 [2024-11-20 14:04:51.566501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:43.978 [2024-11-20 14:04:51.566610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:43.978 [2024-11-20 14:04:51.566627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:43.978 [2024-11-20 14:04:51.566636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.978 [2024-11-20 14:04:51.566790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:43.978 [2024-11-20 14:04:51.566803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:43.978 [2024-11-20 14:04:51.566814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:43.978 [2024-11-20 14:04:51.566823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.978 [2024-11-20 14:04:51.566874] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:43.978 [2024-11-20 14:04:51.566885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:43.978 [2024-11-20 14:04:51.566895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:43.978 [2024-11-20 14:04:51.566903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.978 [2024-11-20 14:04:51.567066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:43.978 [2024-11-20 14:04:51.567086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:43.978 [2024-11-20 14:04:51.567096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:43.978 [2024-11-20 14:04:51.567105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.978 [2024-11-20 14:04:51.567149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:43.978 [2024-11-20 14:04:51.567161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:44:43.978 [2024-11-20 14:04:51.567170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:43.978 [2024-11-20 14:04:51.567179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.978 [2024-11-20 14:04:51.567226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:43.978 [2024-11-20 14:04:51.567242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:43.978 [2024-11-20 14:04:51.567251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:43.978 [2024-11-20 14:04:51.567260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.978 [2024-11-20 14:04:51.567309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:44:43.978 [2024-11-20 14:04:51.567320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:43.978 [2024-11-20 14:04:51.567330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:44:43.978 [2024-11-20 14:04:51.567338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:43.978 [2024-11-20 14:04:51.567487] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 774.268 ms, result 0 00:44:45.880 00:44:45.880 00:44:46.139 14:04:53 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:44:48.061 14:04:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:44:48.061 [2024-11-20 14:04:55.731533] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
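
To orient this stretch of the test: dirty_shutdown.sh@90 fingerprints the reference file testfile2 with md5sum, and the spdk_dd at @93 reads the same extent back out of ftl0 so the test can check, by comparing digests, that the data survived the shutdown/restart cycle. The sizes are self-consistent: --count=262144 at the 4 KiB FTL block size is 262144 × 4 KiB = 1024 MiB, matching the "Copying: 1024/1024 [MB]" progress earlier, and the clean-shutdown statistics above give WAF = total writes / user writes = 119232 / 118272 ≈ 1.0081, i.e. almost no write amplification for this pattern. Condensed from the commands in the log (the authoritative sequence lives in test/ftl/dirty_shutdown.sh):

  # read the test extent back from the recovered FTL bdev...
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 \
    --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile \
    --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
  # ...then compare its digest against the reference copy
  md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile \
         /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
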
00:44:48.061 [2024-11-20 14:04:55.731679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83020 ] 00:44:48.320 [2024-11-20 14:04:55.903437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:48.579 [2024-11-20 14:04:56.041939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:44:48.837 [2024-11-20 14:04:56.459234] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:48.837 [2024-11-20 14:04:56.459326] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:44:49.097 [2024-11-20 14:04:56.621811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.097 [2024-11-20 14:04:56.621884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:44:49.097 [2024-11-20 14:04:56.621904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:44:49.097 [2024-11-20 14:04:56.621914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.097 [2024-11-20 14:04:56.621984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.097 [2024-11-20 14:04:56.621996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:44:49.097 [2024-11-20 14:04:56.622008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:44:49.097 [2024-11-20 14:04:56.622017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.097 [2024-11-20 14:04:56.622040] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:44:49.097 [2024-11-20 14:04:56.623243] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:44:49.097 [2024-11-20 14:04:56.623281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.097 [2024-11-20 14:04:56.623291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:44:49.097 [2024-11-20 14:04:56.623303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:44:49.097 [2024-11-20 14:04:56.623313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.097 [2024-11-20 14:04:56.624889] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:44:49.097 [2024-11-20 14:04:56.648065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.097 [2024-11-20 14:04:56.648143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:44:49.097 [2024-11-20 14:04:56.648159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.216 ms 00:44:49.097 [2024-11-20 14:04:56.648171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.097 [2024-11-20 14:04:56.648310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.097 [2024-11-20 14:04:56.648331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:44:49.097 [2024-11-20 14:04:56.648342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:44:49.097 [2024-11-20 14:04:56.648351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.097 [2024-11-20 14:04:56.656260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:44:49.097 [2024-11-20 14:04:56.656311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:44:49.097 [2024-11-20 14:04:56.656324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.799 ms 00:44:49.097 [2024-11-20 14:04:56.656340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.097 [2024-11-20 14:04:56.656447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.097 [2024-11-20 14:04:56.656467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:44:49.097 [2024-11-20 14:04:56.656477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:44:49.097 [2024-11-20 14:04:56.656485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.097 [2024-11-20 14:04:56.656558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.097 [2024-11-20 14:04:56.656580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:44:49.097 [2024-11-20 14:04:56.656592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:44:49.097 [2024-11-20 14:04:56.656601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.097 [2024-11-20 14:04:56.656634] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:44:49.097 [2024-11-20 14:04:56.662183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.097 [2024-11-20 14:04:56.662246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:44:49.097 [2024-11-20 14:04:56.662259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.571 ms 00:44:49.097 [2024-11-20 14:04:56.662274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.097 [2024-11-20 14:04:56.662326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.097 [2024-11-20 14:04:56.662337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:44:49.097 [2024-11-20 14:04:56.662347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:44:49.097 [2024-11-20 14:04:56.662356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.097 [2024-11-20 14:04:56.662445] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:44:49.097 [2024-11-20 14:04:56.662480] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:44:49.097 [2024-11-20 14:04:56.662520] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:44:49.097 [2024-11-20 14:04:56.662547] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:44:49.097 [2024-11-20 14:04:56.662655] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:44:49.097 [2024-11-20 14:04:56.662674] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:44:49.097 [2024-11-20 14:04:56.662687] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:44:49.097 [2024-11-20 14:04:56.662699] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:44:49.097 [2024-11-20 14:04:56.662710] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:44:49.097 [2024-11-20 14:04:56.662735] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:44:49.097 [2024-11-20 14:04:56.662744] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:44:49.097 [2024-11-20 14:04:56.662753] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:44:49.097 [2024-11-20 14:04:56.662766] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:44:49.097 [2024-11-20 14:04:56.662776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.097 [2024-11-20 14:04:56.662786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:44:49.097 [2024-11-20 14:04:56.662795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:44:49.097 [2024-11-20 14:04:56.662804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.097 [2024-11-20 14:04:56.662891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.097 [2024-11-20 14:04:56.662908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:44:49.097 [2024-11-20 14:04:56.662917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:44:49.097 [2024-11-20 14:04:56.662926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.097 [2024-11-20 14:04:56.663049] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:44:49.097 [2024-11-20 14:04:56.663076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:44:49.097 [2024-11-20 14:04:56.663085] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:49.097 [2024-11-20 14:04:56.663095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:49.097 [2024-11-20 14:04:56.663105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:44:49.097 [2024-11-20 14:04:56.663114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:44:49.097 [2024-11-20 14:04:56.663122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:44:49.097 [2024-11-20 14:04:56.663130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:44:49.097 [2024-11-20 14:04:56.663139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:44:49.097 [2024-11-20 14:04:56.663147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:49.097 [2024-11-20 14:04:56.663155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:44:49.097 [2024-11-20 14:04:56.663163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:44:49.097 [2024-11-20 14:04:56.663171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:44:49.097 [2024-11-20 14:04:56.663179] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:44:49.097 [2024-11-20 14:04:56.663187] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:44:49.097 [2024-11-20 14:04:56.663208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:49.097 [2024-11-20 14:04:56.663217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:44:49.097 [2024-11-20 14:04:56.663225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:44:49.097 [2024-11-20 14:04:56.663233] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:49.097 [2024-11-20 14:04:56.663241] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:44:49.097 [2024-11-20 14:04:56.663250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:44:49.097 [2024-11-20 14:04:56.663258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:49.097 [2024-11-20 14:04:56.663267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:44:49.097 [2024-11-20 14:04:56.663275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:44:49.097 [2024-11-20 14:04:56.663282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:49.097 [2024-11-20 14:04:56.663290] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:44:49.098 [2024-11-20 14:04:56.663298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:44:49.098 [2024-11-20 14:04:56.663306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:49.098 [2024-11-20 14:04:56.663318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:44:49.098 [2024-11-20 14:04:56.663326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:44:49.098 [2024-11-20 14:04:56.663333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:44:49.098 [2024-11-20 14:04:56.663341] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:44:49.098 [2024-11-20 14:04:56.663349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:44:49.098 [2024-11-20 14:04:56.663357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:49.098 [2024-11-20 14:04:56.663364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:44:49.098 [2024-11-20 14:04:56.663372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:44:49.098 [2024-11-20 14:04:56.663380] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:44:49.098 [2024-11-20 14:04:56.663387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:44:49.098 [2024-11-20 14:04:56.663395] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:44:49.098 [2024-11-20 14:04:56.663402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:49.098 [2024-11-20 14:04:56.663410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:44:49.098 [2024-11-20 14:04:56.663418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:44:49.098 [2024-11-20 14:04:56.663426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:49.098 [2024-11-20 14:04:56.663434] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:44:49.098 [2024-11-20 14:04:56.663444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:44:49.098 [2024-11-20 14:04:56.663452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:44:49.098 [2024-11-20 14:04:56.663461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:44:49.098 [2024-11-20 14:04:56.663470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:44:49.098 [2024-11-20 14:04:56.663478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:44:49.098 [2024-11-20 14:04:56.663486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:44:49.098 
[2024-11-20 14:04:56.663493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:44:49.098 [2024-11-20 14:04:56.663501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:44:49.098 [2024-11-20 14:04:56.663509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:44:49.098 [2024-11-20 14:04:56.663519] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:44:49.098 [2024-11-20 14:04:56.663530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:49.098 [2024-11-20 14:04:56.663540] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:44:49.098 [2024-11-20 14:04:56.663548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:44:49.098 [2024-11-20 14:04:56.663557] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:44:49.098 [2024-11-20 14:04:56.663565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:44:49.098 [2024-11-20 14:04:56.663574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:44:49.098 [2024-11-20 14:04:56.663584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:44:49.098 [2024-11-20 14:04:56.663592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:44:49.098 [2024-11-20 14:04:56.663601] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:44:49.098 [2024-11-20 14:04:56.663608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:44:49.098 [2024-11-20 14:04:56.663617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:44:49.098 [2024-11-20 14:04:56.663625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:44:49.098 [2024-11-20 14:04:56.663634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:44:49.098 [2024-11-20 14:04:56.663643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:44:49.098 [2024-11-20 14:04:56.663651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:44:49.098 [2024-11-20 14:04:56.663659] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:44:49.098 [2024-11-20 14:04:56.663673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:44:49.098 [2024-11-20 14:04:56.663683] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:44:49.098 [2024-11-20 14:04:56.663692] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:44:49.098 [2024-11-20 14:04:56.663701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:44:49.098 [2024-11-20 14:04:56.663710] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:44:49.098 [2024-11-20 14:04:56.663735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.098 [2024-11-20 14:04:56.663745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:44:49.098 [2024-11-20 14:04:56.663754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:44:49.098 [2024-11-20 14:04:56.663764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.098 [2024-11-20 14:04:56.703712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.098 [2024-11-20 14:04:56.703824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:44:49.098 [2024-11-20 14:04:56.703841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.952 ms 00:44:49.098 [2024-11-20 14:04:56.703851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.098 [2024-11-20 14:04:56.703976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.098 [2024-11-20 14:04:56.703989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:44:49.098 [2024-11-20 14:04:56.703998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:44:49.098 [2024-11-20 14:04:56.704008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.098 [2024-11-20 14:04:56.765739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.098 [2024-11-20 14:04:56.765833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:44:49.098 [2024-11-20 14:04:56.765852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.757 ms 00:44:49.098 [2024-11-20 14:04:56.765861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.098 [2024-11-20 14:04:56.765939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.098 [2024-11-20 14:04:56.765951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:44:49.098 [2024-11-20 14:04:56.765968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:44:49.098 [2024-11-20 14:04:56.765977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.098 [2024-11-20 14:04:56.766535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.098 [2024-11-20 14:04:56.766563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:44:49.098 [2024-11-20 14:04:56.766575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.461 ms 00:44:49.098 [2024-11-20 14:04:56.766584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.098 [2024-11-20 14:04:56.766740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.098 [2024-11-20 14:04:56.766766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:44:49.098 [2024-11-20 14:04:56.766778] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:44:49.098 [2024-11-20 14:04:56.766795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.098 [2024-11-20 14:04:56.788935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.098 [2024-11-20 14:04:56.788993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:44:49.098 [2024-11-20 14:04:56.789028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.154 ms 00:44:49.098 [2024-11-20 14:04:56.789037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.098 [2024-11-20 14:04:56.810575] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:44:49.098 [2024-11-20 14:04:56.810652] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:44:49.098 [2024-11-20 14:04:56.810670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.098 [2024-11-20 14:04:56.810681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:44:49.098 [2024-11-20 14:04:56.810695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.506 ms 00:44:49.098 [2024-11-20 14:04:56.810704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.358 [2024-11-20 14:04:56.844601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.358 [2024-11-20 14:04:56.844707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:44:49.358 [2024-11-20 14:04:56.844733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.855 ms 00:44:49.358 [2024-11-20 14:04:56.844744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.358 [2024-11-20 14:04:56.865799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.358 [2024-11-20 14:04:56.865910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:44:49.358 [2024-11-20 14:04:56.865926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.970 ms 00:44:49.358 [2024-11-20 14:04:56.865936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.358 [2024-11-20 14:04:56.886611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.358 [2024-11-20 14:04:56.886707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:44:49.358 [2024-11-20 14:04:56.886722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.601 ms 00:44:49.358 [2024-11-20 14:04:56.886740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.358 [2024-11-20 14:04:56.887696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.358 [2024-11-20 14:04:56.887753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:44:49.358 [2024-11-20 14:04:56.887765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:44:49.358 [2024-11-20 14:04:56.887780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.358 [2024-11-20 14:04:56.983732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.358 [2024-11-20 14:04:56.983837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:44:49.358 [2024-11-20 14:04:56.983868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 96.094 ms 00:44:49.358 [2024-11-20 14:04:56.983877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.358 [2024-11-20 14:04:56.999371] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:44:49.358 [2024-11-20 14:04:57.002914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.358 [2024-11-20 14:04:57.002968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:44:49.358 [2024-11-20 14:04:57.002998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.972 ms 00:44:49.358 [2024-11-20 14:04:57.003010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.358 [2024-11-20 14:04:57.003165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.358 [2024-11-20 14:04:57.003182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:44:49.358 [2024-11-20 14:04:57.003192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:44:49.358 [2024-11-20 14:04:57.003206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.358 [2024-11-20 14:04:57.004938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.358 [2024-11-20 14:04:57.004989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:44:49.358 [2024-11-20 14:04:57.005014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.687 ms 00:44:49.358 [2024-11-20 14:04:57.005024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.358 [2024-11-20 14:04:57.005086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.358 [2024-11-20 14:04:57.005099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:44:49.358 [2024-11-20 14:04:57.005108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:44:49.358 [2024-11-20 14:04:57.005117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.358 [2024-11-20 14:04:57.005163] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:44:49.358 [2024-11-20 14:04:57.005176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.358 [2024-11-20 14:04:57.005185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:44:49.358 [2024-11-20 14:04:57.005194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:44:49.358 [2024-11-20 14:04:57.005202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.358 [2024-11-20 14:04:57.048207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.358 [2024-11-20 14:04:57.048290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:44:49.358 [2024-11-20 14:04:57.048307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.063 ms 00:44:49.358 [2024-11-20 14:04:57.048329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:44:49.358 [2024-11-20 14:04:57.048473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:44:49.358 [2024-11-20 14:04:57.048489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:44:49.358 [2024-11-20 14:04:57.048500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:44:49.358 [2024-11-20 14:04:57.048508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
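The layout dump above is internally consistent and pins down the FTL block size even though the log never prints it: the l2p region spans 0x5000 = 20480 blocks yet is reported as 80.00 MiB, so one block is 4096 bytes, and the 20971520 L2P entries at 4 bytes apiece likewise come to exactly 80 MiB. A quick check in plain Python (the derivation is ours, inferred from the figures in the dump, not something SPDK prints):

    # Block size implied by the dump: l2p region = 0x5000 blocks = 80.00 MiB
    block_size = (80 * 1024 * 1024) // 0x5000   # -> 4096 bytes per FTL block
    # L2P table: "L2P entries: 20971520" * "L2P address size: 4" bytes
    assert 20971520 * 4 == 80 * 1024 * 1024     # -> exactly the 80.00 MiB l2p region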
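Every management step in this trace is emitted by mngt/ftl_mngt.c as the same four-record group (an Action or Rollback header, then name, duration, and status records), so per-step timings can be tallied mechanically. A minimal sketch, written against the wrapped one-line form shown here; the regex and function name are illustrative, not part of SPDK:

    import re
    from collections import defaultdict

    # Pair each "name:" record with the "duration:" record that follows it.
    PAIR = re.compile(
        r"trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name: (?P<name>.+?)\s+\d{2}:\d{2}:\d{2}"
        r".*?trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: (?P<ms>[\d.]+) ms",
        re.S,
    )

    def step_durations(log_text):
        totals = defaultdict(float)
        for m in PAIR.finditer(log_text):
            totals[m.group("name")] += float(m.group("ms"))
        # sum(totals.values()) for the startup above lands close to the
        # 428.326 ms the 'FTL startup' finish message just below reports.
        return totals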
00:44:49.358 [2024-11-20 14:04:57.049903] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 428.326 ms, result 0 00:44:50.736  [2024-11-20T14:04:59.392Z] Copying: 928/1048576 [kB] (928 kBps) [2024-11-20T14:05:00.331Z] Copying: 5004/1048576 [kB] (4076 kBps) [2024-11-20T14:05:01.313Z] Copying: 33/1024 [MB] (28 MBps) [2024-11-20T14:05:02.249Z] Copying: 68/1024 [MB] (35 MBps) [2024-11-20T14:05:03.627Z] Copying: 102/1024 [MB] (33 MBps) [2024-11-20T14:05:04.562Z] Copying: 136/1024 [MB] (34 MBps) [2024-11-20T14:05:05.497Z] Copying: 171/1024 [MB] (34 MBps) [2024-11-20T14:05:06.434Z] Copying: 205/1024 [MB] (34 MBps) [2024-11-20T14:05:07.379Z] Copying: 240/1024 [MB] (34 MBps) [2024-11-20T14:05:08.316Z] Copying: 274/1024 [MB] (34 MBps) [2024-11-20T14:05:09.252Z] Copying: 309/1024 [MB] (34 MBps) [2024-11-20T14:05:10.627Z] Copying: 345/1024 [MB] (35 MBps) [2024-11-20T14:05:11.568Z] Copying: 381/1024 [MB] (36 MBps) [2024-11-20T14:05:12.505Z] Copying: 417/1024 [MB] (36 MBps) [2024-11-20T14:05:13.446Z] Copying: 454/1024 [MB] (37 MBps) [2024-11-20T14:05:14.381Z] Copying: 492/1024 [MB] (37 MBps) [2024-11-20T14:05:15.317Z] Copying: 528/1024 [MB] (36 MBps) [2024-11-20T14:05:16.252Z] Copying: 564/1024 [MB] (36 MBps) [2024-11-20T14:05:17.628Z] Copying: 602/1024 [MB] (37 MBps) [2024-11-20T14:05:18.563Z] Copying: 638/1024 [MB] (36 MBps) [2024-11-20T14:05:19.498Z] Copying: 674/1024 [MB] (36 MBps) [2024-11-20T14:05:20.431Z] Copying: 711/1024 [MB] (36 MBps) [2024-11-20T14:05:21.363Z] Copying: 747/1024 [MB] (36 MBps) [2024-11-20T14:05:22.300Z] Copying: 784/1024 [MB] (36 MBps) [2024-11-20T14:05:23.237Z] Copying: 820/1024 [MB] (35 MBps) [2024-11-20T14:05:24.616Z] Copying: 855/1024 [MB] (35 MBps) [2024-11-20T14:05:25.186Z] Copying: 892/1024 [MB] (37 MBps) [2024-11-20T14:05:26.567Z] Copying: 930/1024 [MB] (37 MBps) [2024-11-20T14:05:27.506Z] Copying: 965/1024 [MB] (34 MBps) [2024-11-20T14:05:28.076Z] Copying: 1000/1024 [MB] (35 MBps) [2024-11-20T14:05:28.076Z] Copying: 1024/1024 [MB] (average 33 MBps)[2024-11-20 14:05:27.850062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.357 [2024-11-20 14:05:27.850158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:20.357 [2024-11-20 14:05:27.850179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:45:20.357 [2024-11-20 14:05:27.850192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.358 [2024-11-20 14:05:27.850220] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:20.358 [2024-11-20 14:05:27.855646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.358 [2024-11-20 14:05:27.855705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:20.358 [2024-11-20 14:05:27.855726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.414 ms 00:45:20.358 [2024-11-20 14:05:27.855736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.358 [2024-11-20 14:05:27.855971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.358 [2024-11-20 14:05:27.855993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:20.358 [2024-11-20 14:05:27.856009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:45:20.358 [2024-11-20 14:05:27.856018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 
0 00:45:20.358 [2024-11-20 14:05:27.867748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.358 [2024-11-20 14:05:27.867832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:20.358 [2024-11-20 14:05:27.867849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.733 ms 00:45:20.358 [2024-11-20 14:05:27.867859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.358 [2024-11-20 14:05:27.873391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.358 [2024-11-20 14:05:27.873427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:20.358 [2024-11-20 14:05:27.873444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.508 ms 00:45:20.358 [2024-11-20 14:05:27.873453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.358 [2024-11-20 14:05:27.907069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.358 [2024-11-20 14:05:27.907102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:20.358 [2024-11-20 14:05:27.907113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.625 ms 00:45:20.358 [2024-11-20 14:05:27.907121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.358 [2024-11-20 14:05:27.928197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.358 [2024-11-20 14:05:27.928237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:20.358 [2024-11-20 14:05:27.928249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.083 ms 00:45:20.358 [2024-11-20 14:05:27.928257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.358 [2024-11-20 14:05:27.930136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.358 [2024-11-20 14:05:27.930175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:20.358 [2024-11-20 14:05:27.930185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.846 ms 00:45:20.358 [2024-11-20 14:05:27.930193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.358 [2024-11-20 14:05:27.965454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.358 [2024-11-20 14:05:27.965504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:45:20.358 [2024-11-20 14:05:27.965515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.308 ms 00:45:20.358 [2024-11-20 14:05:27.965523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.358 [2024-11-20 14:05:28.000197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.358 [2024-11-20 14:05:28.000237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:45:20.358 [2024-11-20 14:05:28.000263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.707 ms 00:45:20.358 [2024-11-20 14:05:28.000271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.358 [2024-11-20 14:05:28.036053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.358 [2024-11-20 14:05:28.036095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:20.358 [2024-11-20 14:05:28.036107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.813 ms 00:45:20.358 [2024-11-20 14:05:28.036115] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.358 [2024-11-20 14:05:28.069723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.358 [2024-11-20 14:05:28.069785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:20.358 [2024-11-20 14:05:28.069797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.603 ms 00:45:20.358 [2024-11-20 14:05:28.069805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.358 [2024-11-20 14:05:28.069839] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:20.358 [2024-11-20 14:05:28.069854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:45:20.358 [2024-11-20 14:05:28.069864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:45:20.358 [2024-11-20 14:05:28.069872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.069880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.069888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.069896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.069903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.069912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.069920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.069928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.069935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.069943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.069951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.069958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.069965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.069972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.069980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.069987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.069994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070010] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 
14:05:28.070204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:20.358 [2024-11-20 14:05:28.070233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 
00:45:20.359 [2024-11-20 14:05:28.070400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:20.359 [2024-11-20 14:05:28.070639] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:20.359 [2024-11-20 14:05:28.070646] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7a8e5bc-d38f-4e24-83a5-ef0fc97d3e10 00:45:20.359 [2024-11-20 14:05:28.070655] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:45:20.359 [2024-11-20 14:05:28.070662] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 146368 00:45:20.359 [2024-11-20 14:05:28.070670] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 144384 00:45:20.359 [2024-11-20 14:05:28.070682] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0137 00:45:20.359 [2024-11-20 14:05:28.070689] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:20.359 [2024-11-20 14:05:28.070697] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:20.359 [2024-11-20 14:05:28.070705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:20.359 [2024-11-20 14:05:28.070740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:20.359 [2024-11-20 14:05:28.070747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:45:20.359 [2024-11-20 14:05:28.070754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.359 [2024-11-20 14:05:28.070762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:20.359 [2024-11-20 14:05:28.070771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.919 ms 00:45:20.359 [2024-11-20 14:05:28.070779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.619 [2024-11-20 14:05:28.090046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.619 [2024-11-20 14:05:28.090085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:20.619 [2024-11-20 14:05:28.090095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.275 ms 00:45:20.619 [2024-11-20 14:05:28.090102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.620 [2024-11-20 14:05:28.090641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:20.620 [2024-11-20 14:05:28.090658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:20.620 [2024-11-20 14:05:28.090665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:45:20.620 [2024-11-20 14:05:28.090672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.620 [2024-11-20 14:05:28.141279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:20.620 [2024-11-20 14:05:28.141323] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:20.620 [2024-11-20 14:05:28.141335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:20.620 [2024-11-20 14:05:28.141343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.620 [2024-11-20 14:05:28.141398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:20.620 [2024-11-20 14:05:28.141408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:20.620 [2024-11-20 14:05:28.141417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:20.620 [2024-11-20 14:05:28.141425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.620 [2024-11-20 14:05:28.141510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:20.620 [2024-11-20 14:05:28.141523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:20.620 [2024-11-20 14:05:28.141532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:20.620 [2024-11-20 14:05:28.141539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.620 [2024-11-20 14:05:28.141555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:20.620 [2024-11-20 14:05:28.141564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:20.620 [2024-11-20 14:05:28.141572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:20.620 [2024-11-20 14:05:28.141580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.620 [2024-11-20 14:05:28.263244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:20.620 [2024-11-20 14:05:28.263305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:20.620 [2024-11-20 14:05:28.263317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:20.620 [2024-11-20 14:05:28.263324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.879 [2024-11-20 14:05:28.364257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:20.880 [2024-11-20 14:05:28.364316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:20.880 [2024-11-20 14:05:28.364329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:20.880 [2024-11-20 14:05:28.364337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.880 [2024-11-20 14:05:28.364435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:20.880 [2024-11-20 14:05:28.364451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:20.880 [2024-11-20 14:05:28.364460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:20.880 [2024-11-20 14:05:28.364468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.880 [2024-11-20 14:05:28.364512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:20.880 [2024-11-20 14:05:28.364522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:20.880 [2024-11-20 14:05:28.364530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:20.880 [2024-11-20 14:05:28.364537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.880 [2024-11-20 14:05:28.364632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:45:20.880 [2024-11-20 14:05:28.364645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:20.880 [2024-11-20 14:05:28.364657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:20.880 [2024-11-20 14:05:28.364665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.880 [2024-11-20 14:05:28.364698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:20.880 [2024-11-20 14:05:28.364709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:20.880 [2024-11-20 14:05:28.364737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:20.880 [2024-11-20 14:05:28.364745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.880 [2024-11-20 14:05:28.364786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:20.880 [2024-11-20 14:05:28.364795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:20.880 [2024-11-20 14:05:28.364804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:20.880 [2024-11-20 14:05:28.364815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.880 [2024-11-20 14:05:28.364858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:20.880 [2024-11-20 14:05:28.364867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:20.880 [2024-11-20 14:05:28.364876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:20.880 [2024-11-20 14:05:28.364884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:20.880 [2024-11-20 14:05:28.365006] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 515.910 ms, result 0 00:45:21.841 00:45:21.841 00:45:21.841 14:05:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:45:23.753 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:45:23.753 14:05:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:45:23.753 [2024-11-20 14:05:31.297484] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
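Before the shutdown completed above, the statistics dump also shows the bookkeeping adding up: WAF is simply total writes over user writes, and the valid-LBA total is the sum of what the two non-free bands hold. Worked out from the log's own figures:

    # From ftl_dev_dump_stats above: total writes 146368, user writes 144384
    print(round(146368 / 144384, 4))   # 1.0137, the WAF value the dump reports
    # Band 1 (closed, 261120 blocks) + Band 2 (open, 1536 blocks)
    assert 261120 + 1536 == 262656     # "total valid LBAs: 262656"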
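The spdk_dd invocation above then reads the second half of the test data back out of ftl0. Its --count and --skip follow dd semantics and are given in blocks; with the 4096-byte block size inferred from the layout dump (our assumption, not stated on the command line), each works out to 1024 MiB, matching the 1024/1024 [MB] copy that preceded the dirty shutdown:

    blocks = 262144                          # --count and --skip from the command line
    print(blocks * 4096 // (1024 * 1024))    # 1024 MiB read, starting 1024 MiB into the data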
00:45:23.753 [2024-11-20 14:05:31.297625] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83371 ] 00:45:24.013 [2024-11-20 14:05:31.475466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:45:24.013 [2024-11-20 14:05:31.596103] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:45:24.272 [2024-11-20 14:05:31.956539] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:24.272 [2024-11-20 14:05:31.956611] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:45:24.532 [2024-11-20 14:05:32.117701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.532 [2024-11-20 14:05:32.117770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:45:24.532 [2024-11-20 14:05:32.117787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:45:24.532 [2024-11-20 14:05:32.117795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.532 [2024-11-20 14:05:32.117843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.532 [2024-11-20 14:05:32.117853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:24.532 [2024-11-20 14:05:32.117864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:45:24.532 [2024-11-20 14:05:32.117872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.532 [2024-11-20 14:05:32.117891] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:45:24.532 [2024-11-20 14:05:32.118839] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:45:24.532 [2024-11-20 14:05:32.118868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.532 [2024-11-20 14:05:32.118878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:24.532 [2024-11-20 14:05:32.118887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.984 ms 00:45:24.532 [2024-11-20 14:05:32.118895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.532 [2024-11-20 14:05:32.120389] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:45:24.532 [2024-11-20 14:05:32.138275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.532 [2024-11-20 14:05:32.138311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:45:24.532 [2024-11-20 14:05:32.138324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.923 ms 00:45:24.532 [2024-11-20 14:05:32.138332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.532 [2024-11-20 14:05:32.138397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.532 [2024-11-20 14:05:32.138407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:45:24.532 [2024-11-20 14:05:32.138415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:45:24.532 [2024-11-20 14:05:32.138423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.532 [2024-11-20 14:05:32.145079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:45:24.532 [2024-11-20 14:05:32.145111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:24.532 [2024-11-20 14:05:32.145121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.610 ms 00:45:24.532 [2024-11-20 14:05:32.145134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.532 [2024-11-20 14:05:32.145209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.532 [2024-11-20 14:05:32.145221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:24.532 [2024-11-20 14:05:32.145229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:45:24.532 [2024-11-20 14:05:32.145238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.532 [2024-11-20 14:05:32.145276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.532 [2024-11-20 14:05:32.145286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:45:24.532 [2024-11-20 14:05:32.145295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:45:24.532 [2024-11-20 14:05:32.145302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.532 [2024-11-20 14:05:32.145328] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:45:24.532 [2024-11-20 14:05:32.149946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.532 [2024-11-20 14:05:32.149976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:24.532 [2024-11-20 14:05:32.149986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.637 ms 00:45:24.532 [2024-11-20 14:05:32.149996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.532 [2024-11-20 14:05:32.150023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.532 [2024-11-20 14:05:32.150032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:45:24.532 [2024-11-20 14:05:32.150041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:45:24.532 [2024-11-20 14:05:32.150048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.532 [2024-11-20 14:05:32.150090] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:45:24.532 [2024-11-20 14:05:32.150112] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:45:24.533 [2024-11-20 14:05:32.150146] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:45:24.533 [2024-11-20 14:05:32.150166] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:45:24.533 [2024-11-20 14:05:32.150253] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:45:24.533 [2024-11-20 14:05:32.150272] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:45:24.533 [2024-11-20 14:05:32.150283] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:45:24.533 [2024-11-20 14:05:32.150294] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:45:24.533 [2024-11-20 14:05:32.150303] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:45:24.533 [2024-11-20 14:05:32.150314] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:45:24.533 [2024-11-20 14:05:32.150322] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:45:24.533 [2024-11-20 14:05:32.150331] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:45:24.533 [2024-11-20 14:05:32.150341] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:45:24.533 [2024-11-20 14:05:32.150350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.533 [2024-11-20 14:05:32.150358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:45:24.533 [2024-11-20 14:05:32.150367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:45:24.533 [2024-11-20 14:05:32.150374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.533 [2024-11-20 14:05:32.150441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.533 [2024-11-20 14:05:32.150450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:45:24.533 [2024-11-20 14:05:32.150458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:45:24.533 [2024-11-20 14:05:32.150465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.533 [2024-11-20 14:05:32.150560] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:45:24.533 [2024-11-20 14:05:32.150581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:45:24.533 [2024-11-20 14:05:32.150590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:24.533 [2024-11-20 14:05:32.150599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:24.533 [2024-11-20 14:05:32.150607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:45:24.533 [2024-11-20 14:05:32.150615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:45:24.533 [2024-11-20 14:05:32.150623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:45:24.533 [2024-11-20 14:05:32.150631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:45:24.533 [2024-11-20 14:05:32.150640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:45:24.533 [2024-11-20 14:05:32.150647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:24.533 [2024-11-20 14:05:32.150653] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:45:24.533 [2024-11-20 14:05:32.150660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:45:24.533 [2024-11-20 14:05:32.150667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:45:24.533 [2024-11-20 14:05:32.150674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:45:24.533 [2024-11-20 14:05:32.150681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:45:24.533 [2024-11-20 14:05:32.150697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:24.533 [2024-11-20 14:05:32.150705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:45:24.533 [2024-11-20 14:05:32.150712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:45:24.533 [2024-11-20 14:05:32.150733] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:24.533 [2024-11-20 14:05:32.150742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:45:24.533 [2024-11-20 14:05:32.150751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:45:24.533 [2024-11-20 14:05:32.150757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:24.533 [2024-11-20 14:05:32.150764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:45:24.533 [2024-11-20 14:05:32.150771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:45:24.533 [2024-11-20 14:05:32.150778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:24.533 [2024-11-20 14:05:32.150784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:45:24.533 [2024-11-20 14:05:32.150792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:45:24.533 [2024-11-20 14:05:32.150799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:24.533 [2024-11-20 14:05:32.150805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:45:24.533 [2024-11-20 14:05:32.150812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:45:24.533 [2024-11-20 14:05:32.150818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:45:24.533 [2024-11-20 14:05:32.150825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:45:24.533 [2024-11-20 14:05:32.150834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:45:24.533 [2024-11-20 14:05:32.150841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:24.533 [2024-11-20 14:05:32.150847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:45:24.533 [2024-11-20 14:05:32.150854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:45:24.533 [2024-11-20 14:05:32.150859] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:45:24.533 [2024-11-20 14:05:32.150866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:45:24.533 [2024-11-20 14:05:32.150872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:45:24.533 [2024-11-20 14:05:32.150878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:24.533 [2024-11-20 14:05:32.150886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:45:24.533 [2024-11-20 14:05:32.150893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:45:24.533 [2024-11-20 14:05:32.150899] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:24.533 [2024-11-20 14:05:32.150906] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:45:24.533 [2024-11-20 14:05:32.150913] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:45:24.533 [2024-11-20 14:05:32.150921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:45:24.533 [2024-11-20 14:05:32.150927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:45:24.533 [2024-11-20 14:05:32.150935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:45:24.533 [2024-11-20 14:05:32.150942] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:45:24.533 [2024-11-20 14:05:32.150949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:45:24.533 
[2024-11-20 14:05:32.150955] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:45:24.533 [2024-11-20 14:05:32.150962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:45:24.533 [2024-11-20 14:05:32.150969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:45:24.533 [2024-11-20 14:05:32.150977] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:45:24.533 [2024-11-20 14:05:32.150987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:24.533 [2024-11-20 14:05:32.150996] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:45:24.533 [2024-11-20 14:05:32.151004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:45:24.533 [2024-11-20 14:05:32.151011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:45:24.533 [2024-11-20 14:05:32.151020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:45:24.533 [2024-11-20 14:05:32.151028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:45:24.533 [2024-11-20 14:05:32.151046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:45:24.533 [2024-11-20 14:05:32.151052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:45:24.533 [2024-11-20 14:05:32.151075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:45:24.533 [2024-11-20 14:05:32.151083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:45:24.533 [2024-11-20 14:05:32.151090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:45:24.533 [2024-11-20 14:05:32.151098] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:45:24.533 [2024-11-20 14:05:32.151105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:45:24.533 [2024-11-20 14:05:32.151111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:45:24.533 [2024-11-20 14:05:32.151118] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:45:24.533 [2024-11-20 14:05:32.151125] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:45:24.533 [2024-11-20 14:05:32.151138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:45:24.533 [2024-11-20 14:05:32.151147] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:45:24.533 [2024-11-20 14:05:32.151155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:45:24.533 [2024-11-20 14:05:32.151162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:45:24.533 [2024-11-20 14:05:32.151170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:45:24.533 [2024-11-20 14:05:32.151179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.533 [2024-11-20 14:05:32.151188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:45:24.533 [2024-11-20 14:05:32.151196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.674 ms 00:45:24.534 [2024-11-20 14:05:32.151203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.534 [2024-11-20 14:05:32.189687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.534 [2024-11-20 14:05:32.189745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:24.534 [2024-11-20 14:05:32.189759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.506 ms 00:45:24.534 [2024-11-20 14:05:32.189780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.534 [2024-11-20 14:05:32.189872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.534 [2024-11-20 14:05:32.189882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:45:24.534 [2024-11-20 14:05:32.189891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:45:24.534 [2024-11-20 14:05:32.189898] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.794 [2024-11-20 14:05:32.255554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.794 [2024-11-20 14:05:32.255605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:24.794 [2024-11-20 14:05:32.255619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.712 ms 00:45:24.794 [2024-11-20 14:05:32.255628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.794 [2024-11-20 14:05:32.255684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.794 [2024-11-20 14:05:32.255693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:24.794 [2024-11-20 14:05:32.255705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:45:24.794 [2024-11-20 14:05:32.255714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.794 [2024-11-20 14:05:32.256234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.795 [2024-11-20 14:05:32.256258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:24.795 [2024-11-20 14:05:32.256267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:45:24.795 [2024-11-20 14:05:32.256275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.795 [2024-11-20 14:05:32.256389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.795 [2024-11-20 14:05:32.256412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:24.795 [2024-11-20 14:05:32.256421] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:45:24.795 [2024-11-20 14:05:32.256435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.795 [2024-11-20 14:05:32.275513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.795 [2024-11-20 14:05:32.275554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:24.795 [2024-11-20 14:05:32.275569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.091 ms 00:45:24.795 [2024-11-20 14:05:32.275576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.795 [2024-11-20 14:05:32.294250] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:45:24.795 [2024-11-20 14:05:32.294288] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:45:24.795 [2024-11-20 14:05:32.294301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.795 [2024-11-20 14:05:32.294310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:45:24.795 [2024-11-20 14:05:32.294320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.640 ms 00:45:24.795 [2024-11-20 14:05:32.294328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.795 [2024-11-20 14:05:32.323637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.795 [2024-11-20 14:05:32.323676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:45:24.795 [2024-11-20 14:05:32.323687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.325 ms 00:45:24.795 [2024-11-20 14:05:32.323695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.795 [2024-11-20 14:05:32.341571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.795 [2024-11-20 14:05:32.341607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:45:24.795 [2024-11-20 14:05:32.341619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.864 ms 00:45:24.795 [2024-11-20 14:05:32.341626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.795 [2024-11-20 14:05:32.359714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.795 [2024-11-20 14:05:32.359752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:45:24.795 [2024-11-20 14:05:32.359762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.087 ms 00:45:24.795 [2024-11-20 14:05:32.359770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.795 [2024-11-20 14:05:32.360517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.795 [2024-11-20 14:05:32.360548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:45:24.795 [2024-11-20 14:05:32.360558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.641 ms 00:45:24.795 [2024-11-20 14:05:32.360569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.795 [2024-11-20 14:05:32.445872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.795 [2024-11-20 14:05:32.445936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:45:24.795 [2024-11-20 14:05:32.445956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 85.445 ms 00:45:24.795 [2024-11-20 14:05:32.445964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.795 [2024-11-20 14:05:32.457354] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:45:24.795 [2024-11-20 14:05:32.460496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.795 [2024-11-20 14:05:32.460531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:45:24.795 [2024-11-20 14:05:32.460545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.483 ms 00:45:24.795 [2024-11-20 14:05:32.460554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.795 [2024-11-20 14:05:32.460660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.795 [2024-11-20 14:05:32.460672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:45:24.795 [2024-11-20 14:05:32.460682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:45:24.795 [2024-11-20 14:05:32.460694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.795 [2024-11-20 14:05:32.461501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.795 [2024-11-20 14:05:32.461524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:45:24.795 [2024-11-20 14:05:32.461535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.746 ms 00:45:24.795 [2024-11-20 14:05:32.461543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.795 [2024-11-20 14:05:32.461569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.795 [2024-11-20 14:05:32.461578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:45:24.795 [2024-11-20 14:05:32.461587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:45:24.795 [2024-11-20 14:05:32.461595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.795 [2024-11-20 14:05:32.461632] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:45:24.795 [2024-11-20 14:05:32.461644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.795 [2024-11-20 14:05:32.461653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:45:24.795 [2024-11-20 14:05:32.461661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:45:24.795 [2024-11-20 14:05:32.461670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.795 [2024-11-20 14:05:32.499407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.795 [2024-11-20 14:05:32.499448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:45:24.795 [2024-11-20 14:05:32.499461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.788 ms 00:45:24.795 [2024-11-20 14:05:32.499476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:24.795 [2024-11-20 14:05:32.499555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:24.795 [2024-11-20 14:05:32.499566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:45:24.795 [2024-11-20 14:05:32.499575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:45:24.795 [2024-11-20 14:05:32.499583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
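Each management step above is logged by trace_step as an Action/name/duration/status quadruple, and the finish_msg record just below sums the whole 'FTL startup' process (383.270 ms here). As a quick sanity check, the per-step durations can be totalled straight from a saved copy of this console output and compared against that figure. A minimal sketch, assuming the log was saved to a file first (the path and the exact match patterns are assumptions, not part of the harness):

#!/usr/bin/env bash
# Total the per-step FTL durations and print the per-process totals.
# LOG is a hypothetical path to a saved copy of this console output.
LOG=${1:-console.log}

# Per-step records look like:
#   ... 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.610 ms
# grep -o emits one match per record even when many records share a line.
grep -o 'trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration: [0-9.]* ms' "$LOG" |
  awk '{ sum += $(NF-1) } END { printf "sum of steps: %.3f ms\n", sum }'

# Process totals look like:
#   ... finish_msg: *NOTICE*: ... name 'FTL startup', duration = 383.270 ms, result 0
grep -o "name 'FTL [a-z]*', duration = [0-9.]* ms" "$LOG"

The sum of the steps should come in somewhat below the process total, since finish_msg also covers time spent between steps.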
00:45:24.795 [2024-11-20 14:05:32.500749] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 383.270 ms, result 0 00:45:26.176  [2024-11-20T14:05:34.832Z] Copying: 31/1024 [MB] (31 MBps) [2024-11-20T14:05:35.772Z] Copying: 63/1024 [MB] (31 MBps) [2024-11-20T14:05:36.712Z] Copying: 94/1024 [MB] (31 MBps) [2024-11-20T14:05:38.094Z] Copying: 125/1024 [MB] (31 MBps) [2024-11-20T14:05:38.662Z] Copying: 156/1024 [MB] (30 MBps) [2024-11-20T14:05:40.043Z] Copying: 184/1024 [MB] (27 MBps) [2024-11-20T14:05:40.983Z] Copying: 213/1024 [MB] (29 MBps) [2024-11-20T14:05:41.920Z] Copying: 242/1024 [MB] (29 MBps) [2024-11-20T14:05:42.859Z] Copying: 274/1024 [MB] (31 MBps) [2024-11-20T14:05:43.798Z] Copying: 306/1024 [MB] (32 MBps) [2024-11-20T14:05:44.764Z] Copying: 339/1024 [MB] (32 MBps) [2024-11-20T14:05:45.701Z] Copying: 373/1024 [MB] (34 MBps) [2024-11-20T14:05:47.080Z] Copying: 407/1024 [MB] (34 MBps) [2024-11-20T14:05:47.644Z] Copying: 442/1024 [MB] (34 MBps) [2024-11-20T14:05:49.019Z] Copying: 475/1024 [MB] (33 MBps) [2024-11-20T14:05:49.954Z] Copying: 510/1024 [MB] (34 MBps) [2024-11-20T14:05:50.944Z] Copying: 544/1024 [MB] (34 MBps) [2024-11-20T14:05:51.889Z] Copying: 579/1024 [MB] (35 MBps) [2024-11-20T14:05:52.824Z] Copying: 614/1024 [MB] (34 MBps) [2024-11-20T14:05:53.759Z] Copying: 649/1024 [MB] (34 MBps) [2024-11-20T14:05:54.695Z] Copying: 684/1024 [MB] (35 MBps) [2024-11-20T14:05:55.630Z] Copying: 718/1024 [MB] (34 MBps) [2024-11-20T14:05:57.007Z] Copying: 753/1024 [MB] (35 MBps) [2024-11-20T14:05:57.945Z] Copying: 789/1024 [MB] (35 MBps) [2024-11-20T14:05:58.883Z] Copying: 822/1024 [MB] (33 MBps) [2024-11-20T14:05:59.819Z] Copying: 853/1024 [MB] (30 MBps) [2024-11-20T14:06:00.752Z] Copying: 885/1024 [MB] (32 MBps) [2024-11-20T14:06:01.683Z] Copying: 916/1024 [MB] (31 MBps) [2024-11-20T14:06:02.623Z] Copying: 944/1024 [MB] (28 MBps) [2024-11-20T14:06:03.618Z] Copying: 972/1024 [MB] (28 MBps) [2024-11-20T14:06:04.552Z] Copying: 1001/1024 [MB] (28 MBps) [2024-11-20T14:06:04.552Z] Copying: 1024/1024 [MB] (average 32 MBps)[2024-11-20 14:06:04.492295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:56.833 [2024-11-20 14:06:04.492406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:45:56.833 [2024-11-20 14:06:04.492436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:45:56.833 [2024-11-20 14:06:04.492455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:56.833 [2024-11-20 14:06:04.492500] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:45:56.833 [2024-11-20 14:06:04.501542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:56.833 [2024-11-20 14:06:04.501625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:45:56.833 [2024-11-20 14:06:04.501664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.021 ms 00:45:56.833 [2024-11-20 14:06:04.501681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:56.833 [2024-11-20 14:06:04.502154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:56.833 [2024-11-20 14:06:04.502194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:45:56.833 [2024-11-20 14:06:04.502217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.376 ms 00:45:56.833 [2024-11-20 14:06:04.502234] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:56.833 [2024-11-20 14:06:04.507135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:56.833 [2024-11-20 14:06:04.507163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:45:56.833 [2024-11-20 14:06:04.507195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.882 ms 00:45:56.833 [2024-11-20 14:06:04.507208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:56.833 [2024-11-20 14:06:04.513494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:56.833 [2024-11-20 14:06:04.513557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:45:56.833 [2024-11-20 14:06:04.513575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.258 ms 00:45:56.833 [2024-11-20 14:06:04.513587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.092 [2024-11-20 14:06:04.564893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:57.092 [2024-11-20 14:06:04.565012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:45:57.092 [2024-11-20 14:06:04.565034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.252 ms 00:45:57.092 [2024-11-20 14:06:04.565045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.092 [2024-11-20 14:06:04.595410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:57.092 [2024-11-20 14:06:04.595528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:45:57.092 [2024-11-20 14:06:04.595550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.297 ms 00:45:57.092 [2024-11-20 14:06:04.595563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.092 [2024-11-20 14:06:04.597923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:57.092 [2024-11-20 14:06:04.598000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:45:57.092 [2024-11-20 14:06:04.598017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.236 ms 00:45:57.092 [2024-11-20 14:06:04.598029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.092 [2024-11-20 14:06:04.648930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:57.092 [2024-11-20 14:06:04.649038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:45:57.092 [2024-11-20 14:06:04.649058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.971 ms 00:45:57.092 [2024-11-20 14:06:04.649070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.092 [2024-11-20 14:06:04.699036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:57.092 [2024-11-20 14:06:04.699173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:45:57.092 [2024-11-20 14:06:04.699195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.941 ms 00:45:57.092 [2024-11-20 14:06:04.699206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.092 [2024-11-20 14:06:04.748298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:57.092 [2024-11-20 14:06:04.748410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:45:57.092 [2024-11-20 14:06:04.748430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 49.064 ms 00:45:57.092 [2024-11-20 14:06:04.748442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.092 [2024-11-20 14:06:04.797476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:57.092 [2024-11-20 14:06:04.797589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:45:57.092 [2024-11-20 14:06:04.797612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.914 ms 00:45:57.092 [2024-11-20 14:06:04.797625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.092 [2024-11-20 14:06:04.797750] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:45:57.092 [2024-11-20 14:06:04.797784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:45:57.092 [2024-11-20 14:06:04.797826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:45:57.092 [2024-11-20 14:06:04.797839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:45:57.092 [2024-11-20 14:06:04.797851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:45:57.092 [2024-11-20 14:06:04.797863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:45:57.092 [2024-11-20 14:06:04.797874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:45:57.092 [2024-11-20 14:06:04.797885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:45:57.092 [2024-11-20 14:06:04.797896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:45:57.092 [2024-11-20 14:06:04.797908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:45:57.092 [2024-11-20 14:06:04.797919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:45:57.092 [2024-11-20 14:06:04.797931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:45:57.092 [2024-11-20 14:06:04.797942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:45:57.092 [2024-11-20 14:06:04.797953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:45:57.092 [2024-11-20 14:06:04.797965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:45:57.092 [2024-11-20 14:06:04.797978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:45:57.092 [2024-11-20 14:06:04.797989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:45:57.092 [2024-11-20 14:06:04.798000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:45:57.092 [2024-11-20 14:06:04.798011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:45:57.092 [2024-11-20 14:06:04.798022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: 
free 00:45:57.093 [2024-11-20 14:06:04.798044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 
261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798918] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:45:57.093 [2024-11-20 14:06:04.798995] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:45:57.093 [2024-11-20 14:06:04.799018] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: a7a8e5bc-d38f-4e24-83a5-ef0fc97d3e10 00:45:57.093 [2024-11-20 14:06:04.799030] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:45:57.093 [2024-11-20 14:06:04.799041] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:45:57.093 [2024-11-20 14:06:04.799052] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:45:57.093 [2024-11-20 14:06:04.799064] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:45:57.093 [2024-11-20 14:06:04.799075] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:45:57.093 [2024-11-20 14:06:04.799087] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:45:57.093 [2024-11-20 14:06:04.799127] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:45:57.094 [2024-11-20 14:06:04.799138] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:45:57.094 [2024-11-20 14:06:04.799147] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:45:57.094 [2024-11-20 14:06:04.799159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:57.094 [2024-11-20 14:06:04.799172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:45:57.094 [2024-11-20 14:06:04.799184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.414 ms 00:45:57.094 [2024-11-20 14:06:04.799196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.352 [2024-11-20 14:06:04.825007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:57.352 [2024-11-20 14:06:04.825108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:45:57.352 [2024-11-20 14:06:04.825130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.773 ms 00:45:57.352 [2024-11-20 14:06:04.825141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.352 [2024-11-20 14:06:04.825928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:45:57.352 [2024-11-20 14:06:04.825956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:45:57.352 [2024-11-20 14:06:04.825988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:45:57.352 [2024-11-20 14:06:04.825999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.352 [2024-11-20 14:06:04.890829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:57.352 
[2024-11-20 14:06:04.890928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:45:57.352 [2024-11-20 14:06:04.890947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:57.352 [2024-11-20 14:06:04.890960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.352 [2024-11-20 14:06:04.891066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:57.352 [2024-11-20 14:06:04.891103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:45:57.352 [2024-11-20 14:06:04.891123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:57.352 [2024-11-20 14:06:04.891134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.352 [2024-11-20 14:06:04.891246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:57.352 [2024-11-20 14:06:04.891267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:45:57.352 [2024-11-20 14:06:04.891280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:57.352 [2024-11-20 14:06:04.891292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.352 [2024-11-20 14:06:04.891315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:57.352 [2024-11-20 14:06:04.891327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:45:57.352 [2024-11-20 14:06:04.891339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:57.352 [2024-11-20 14:06:04.891356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.353 [2024-11-20 14:06:05.049658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:57.353 [2024-11-20 14:06:05.049776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:45:57.353 [2024-11-20 14:06:05.049798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:57.353 [2024-11-20 14:06:05.049810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.610 [2024-11-20 14:06:05.184426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:57.610 [2024-11-20 14:06:05.184533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:45:57.610 [2024-11-20 14:06:05.184575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:57.610 [2024-11-20 14:06:05.184587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.610 [2024-11-20 14:06:05.184761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:57.610 [2024-11-20 14:06:05.184782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:45:57.610 [2024-11-20 14:06:05.184796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:57.610 [2024-11-20 14:06:05.184807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.610 [2024-11-20 14:06:05.184868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:57.610 [2024-11-20 14:06:05.184888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:45:57.610 [2024-11-20 14:06:05.184900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:57.610 [2024-11-20 14:06:05.184911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.610 [2024-11-20 14:06:05.185077] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:57.610 [2024-11-20 14:06:05.185102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:45:57.610 [2024-11-20 14:06:05.185114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:57.610 [2024-11-20 14:06:05.185126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.610 [2024-11-20 14:06:05.185179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:57.610 [2024-11-20 14:06:05.185200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:45:57.610 [2024-11-20 14:06:05.185212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:57.610 [2024-11-20 14:06:05.185225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.610 [2024-11-20 14:06:05.185282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:57.610 [2024-11-20 14:06:05.185295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:45:57.610 [2024-11-20 14:06:05.185306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:57.610 [2024-11-20 14:06:05.185317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.610 [2024-11-20 14:06:05.185373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:45:57.610 [2024-11-20 14:06:05.185392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:45:57.610 [2024-11-20 14:06:05.185403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:45:57.610 [2024-11-20 14:06:05.185415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:45:57.610 [2024-11-20 14:06:05.185577] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 694.606 ms, result 0 00:45:58.984 00:45:58.984 00:45:58.984 14:06:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:46:00.884 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:46:00.884 14:06:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:46:00.884 14:06:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:46:00.884 14:06:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:46:00.884 14:06:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:46:01.142 14:06:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:46:01.400 14:06:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:46:01.400 14:06:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:46:01.400 Process with pid 81808 is not found 00:46:01.400 14:06:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81808 00:46:01.400 14:06:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81808 ']' 00:46:01.400 14:06:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81808 00:46:01.400 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81808) - No such process 00:46:01.400 14:06:08 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@981 -- # echo 'Process with pid 81808 is not found' 00:46:01.400 14:06:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:46:01.657 Remove shared memory files 00:46:01.657 14:06:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:46:01.657 14:06:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:46:01.657 14:06:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:46:01.657 14:06:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:46:01.657 14:06:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:46:01.657 14:06:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:46:01.657 14:06:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:46:01.657 00:46:01.657 real 3m8.545s 00:46:01.657 user 3m33.111s 00:46:01.657 sys 0m31.011s 00:46:01.657 14:06:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:46:01.657 14:06:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:46:01.657 ************************************ 00:46:01.657 END TEST ftl_dirty_shutdown 00:46:01.657 ************************************ 00:46:01.657 14:06:09 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:46:01.657 14:06:09 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:46:01.657 14:06:09 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:46:01.657 14:06:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:46:01.657 ************************************ 00:46:01.657 START TEST ftl_upgrade_shutdown 00:46:01.657 ************************************ 00:46:01.657 14:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:46:01.657 * Looking for test storage... 
00:46:01.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:46:01.657 14:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:46:01.657 14:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:46:01.657 14:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:46:01.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:01.918 --rc genhtml_branch_coverage=1 00:46:01.918 --rc genhtml_function_coverage=1 00:46:01.918 --rc genhtml_legend=1 00:46:01.918 --rc geninfo_all_blocks=1 00:46:01.918 --rc geninfo_unexecuted_blocks=1 00:46:01.918 00:46:01.918 ' 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:46:01.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:01.918 --rc genhtml_branch_coverage=1 00:46:01.918 --rc genhtml_function_coverage=1 00:46:01.918 --rc genhtml_legend=1 00:46:01.918 --rc geninfo_all_blocks=1 00:46:01.918 --rc geninfo_unexecuted_blocks=1 00:46:01.918 00:46:01.918 ' 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:46:01.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:01.918 --rc genhtml_branch_coverage=1 00:46:01.918 --rc genhtml_function_coverage=1 00:46:01.918 --rc genhtml_legend=1 00:46:01.918 --rc geninfo_all_blocks=1 00:46:01.918 --rc geninfo_unexecuted_blocks=1 00:46:01.918 00:46:01.918 ' 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:46:01.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:46:01.918 --rc genhtml_branch_coverage=1 00:46:01.918 --rc genhtml_function_coverage=1 00:46:01.918 --rc genhtml_legend=1 00:46:01.918 --rc geninfo_all_blocks=1 00:46:01.918 --rc geninfo_unexecuted_blocks=1 00:46:01.918 00:46:01.918 ' 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:46:01.918 14:06:09 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=83817 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 83817 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83817 ']' 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:01.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:01.918 14:06:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:46:01.918 [2024-11-20 14:06:09.578456] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
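[Note] The trace above is ftl/common.sh's tcp_target_setup: with no saved tgt.json present, a fresh SPDK target is launched pinned to core 0, and the script blocks until the default RPC socket answers. A minimal sketch of that bring-up, using the paths from this run (waitforlisten is the suite's own helper, shown here by name only):

  spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  # launch the target on core 0 only; the FTL instance will live on this reactor
  "$spdk_tgt_bin" --cpumask='[0]' &
  spdk_tgt_pid=$!
  # block until /var/tmp/spdk.sock accepts RPC connections
  waitforlisten "$spdk_tgt_pid"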
00:46:01.918 [2024-11-20 14:06:09.578589] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83817 ] 00:46:02.177 [2024-11-20 14:06:09.757186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:02.435 [2024-11-20 14:06:09.903886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:46:03.369 14:06:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:46:03.627 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:46:03.628 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:46:03.628 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:46:03.628 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:46:03.628 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:46:03.628 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:46:03.628 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:46:03.628 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:46:03.886 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:46:03.887 { 00:46:03.887 "name": "basen1", 00:46:03.887 "aliases": [ 00:46:03.887 "1b96f0be-1a6b-48e3-bf23-63ffc9d9e1e5" 00:46:03.887 ], 00:46:03.887 "product_name": "NVMe disk", 00:46:03.887 "block_size": 4096, 00:46:03.887 "num_blocks": 1310720, 00:46:03.887 "uuid": "1b96f0be-1a6b-48e3-bf23-63ffc9d9e1e5", 00:46:03.887 "numa_id": -1, 00:46:03.887 "assigned_rate_limits": { 00:46:03.887 "rw_ios_per_sec": 0, 00:46:03.887 "rw_mbytes_per_sec": 0, 00:46:03.887 "r_mbytes_per_sec": 0, 00:46:03.887 "w_mbytes_per_sec": 0 00:46:03.887 }, 00:46:03.887 "claimed": true, 00:46:03.887 "claim_type": "read_many_write_one", 00:46:03.887 "zoned": false, 00:46:03.887 "supported_io_types": { 00:46:03.887 "read": true, 00:46:03.887 "write": true, 00:46:03.887 "unmap": true, 00:46:03.887 "flush": true, 00:46:03.887 "reset": true, 00:46:03.887 "nvme_admin": true, 00:46:03.887 "nvme_io": true, 00:46:03.887 "nvme_io_md": false, 00:46:03.887 "write_zeroes": true, 00:46:03.887 "zcopy": false, 00:46:03.887 "get_zone_info": false, 00:46:03.887 "zone_management": false, 00:46:03.887 "zone_append": false, 00:46:03.887 "compare": true, 00:46:03.887 "compare_and_write": false, 00:46:03.887 "abort": true, 00:46:03.887 "seek_hole": false, 00:46:03.887 "seek_data": false, 00:46:03.887 "copy": true, 00:46:03.887 "nvme_iov_md": false 00:46:03.887 }, 00:46:03.887 "driver_specific": { 00:46:03.887 "nvme": [ 00:46:03.887 { 00:46:03.887 "pci_address": "0000:00:11.0", 00:46:03.887 "trid": { 00:46:03.887 "trtype": "PCIe", 00:46:03.887 "traddr": "0000:00:11.0" 00:46:03.887 }, 00:46:03.887 "ctrlr_data": { 00:46:03.887 "cntlid": 0, 00:46:03.887 "vendor_id": "0x1b36", 00:46:03.887 "model_number": "QEMU NVMe Ctrl", 00:46:03.887 "serial_number": "12341", 00:46:03.887 "firmware_revision": "8.0.0", 00:46:03.887 "subnqn": "nqn.2019-08.org.qemu:12341", 00:46:03.887 "oacs": { 00:46:03.887 "security": 0, 00:46:03.887 "format": 1, 00:46:03.887 "firmware": 0, 00:46:03.887 "ns_manage": 1 00:46:03.887 }, 00:46:03.887 "multi_ctrlr": false, 00:46:03.887 "ana_reporting": false 00:46:03.887 }, 00:46:03.887 "vs": { 00:46:03.887 "nvme_version": "1.4" 00:46:03.887 }, 00:46:03.887 "ns_data": { 00:46:03.887 "id": 1, 00:46:03.887 "can_share": false 00:46:03.887 } 00:46:03.887 } 00:46:03.887 ], 00:46:03.887 "mp_policy": "active_passive" 00:46:03.887 } 00:46:03.887 } 00:46:03.887 ]' 00:46:03.887 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:46:03.887 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:46:03.887 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:46:03.887 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:46:03.887 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:46:03.887 14:06:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:46:03.887 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:46:03.887 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:46:03.887 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:46:04.145 14:06:11 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:46:04.145 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:46:04.145 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=019b54ac-8804-4822-901c-1106fd630e39 00:46:04.145 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:46:04.145 14:06:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 019b54ac-8804-4822-901c-1106fd630e39 00:46:04.403 14:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:46:04.661 14:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=fb5d9f83-2fe4-4eed-b903-3a6b98068e83 00:46:04.662 14:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u fb5d9f83-2fe4-4eed-b903-3a6b98068e83 00:46:04.920 14:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=5fb90783-3b4a-4131-a6af-f25aac9f0336 00:46:04.920 14:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 5fb90783-3b4a-4131-a6af-f25aac9f0336 ]] 00:46:04.920 14:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 5fb90783-3b4a-4131-a6af-f25aac9f0336 5120 00:46:04.920 14:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:46:04.920 14:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:46:04.920 14:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=5fb90783-3b4a-4131-a6af-f25aac9f0336 00:46:04.920 14:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:46:04.920 14:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5fb90783-3b4a-4131-a6af-f25aac9f0336 00:46:04.920 14:06:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5fb90783-3b4a-4131-a6af-f25aac9f0336 00:46:04.920 14:06:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:46:04.920 14:06:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:46:04.920 14:06:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:46:04.920 14:06:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5fb90783-3b4a-4131-a6af-f25aac9f0336 00:46:05.179 14:06:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:46:05.179 { 00:46:05.179 "name": "5fb90783-3b4a-4131-a6af-f25aac9f0336", 00:46:05.179 "aliases": [ 00:46:05.179 "lvs/basen1p0" 00:46:05.179 ], 00:46:05.179 "product_name": "Logical Volume", 00:46:05.179 "block_size": 4096, 00:46:05.179 "num_blocks": 5242880, 00:46:05.179 "uuid": "5fb90783-3b4a-4131-a6af-f25aac9f0336", 00:46:05.179 "assigned_rate_limits": { 00:46:05.179 "rw_ios_per_sec": 0, 00:46:05.179 "rw_mbytes_per_sec": 0, 00:46:05.179 "r_mbytes_per_sec": 0, 00:46:05.179 "w_mbytes_per_sec": 0 00:46:05.179 }, 00:46:05.179 "claimed": false, 00:46:05.179 "zoned": false, 00:46:05.179 "supported_io_types": { 00:46:05.179 "read": true, 00:46:05.179 "write": true, 00:46:05.179 "unmap": true, 00:46:05.179 "flush": false, 00:46:05.179 "reset": true, 00:46:05.179 "nvme_admin": false, 00:46:05.179 "nvme_io": false, 00:46:05.179 "nvme_io_md": false, 00:46:05.179 "write_zeroes": 
true, 00:46:05.179 "zcopy": false, 00:46:05.179 "get_zone_info": false, 00:46:05.179 "zone_management": false, 00:46:05.179 "zone_append": false, 00:46:05.179 "compare": false, 00:46:05.179 "compare_and_write": false, 00:46:05.179 "abort": false, 00:46:05.179 "seek_hole": true, 00:46:05.179 "seek_data": true, 00:46:05.179 "copy": false, 00:46:05.179 "nvme_iov_md": false 00:46:05.179 }, 00:46:05.179 "driver_specific": { 00:46:05.179 "lvol": { 00:46:05.179 "lvol_store_uuid": "fb5d9f83-2fe4-4eed-b903-3a6b98068e83", 00:46:05.179 "base_bdev": "basen1", 00:46:05.179 "thin_provision": true, 00:46:05.179 "num_allocated_clusters": 0, 00:46:05.179 "snapshot": false, 00:46:05.179 "clone": false, 00:46:05.179 "esnap_clone": false 00:46:05.179 } 00:46:05.179 } 00:46:05.179 } 00:46:05.179 ]' 00:46:05.179 14:06:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:46:05.179 14:06:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:46:05.179 14:06:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:46:05.179 14:06:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:46:05.179 14:06:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:46:05.179 14:06:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:46:05.179 14:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:46:05.179 14:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:46:05.179 14:06:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:46:05.438 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:46:05.438 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:46:05.438 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:46:05.698 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:46:05.698 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:46:05.698 14:06:13 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 5fb90783-3b4a-4131-a6af-f25aac9f0336 -c cachen1p0 --l2p_dram_limit 2 00:46:05.957 [2024-11-20 14:06:13.535674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:05.957 [2024-11-20 14:06:13.535787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:46:05.957 [2024-11-20 14:06:13.535807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:46:05.957 [2024-11-20 14:06:13.535817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:05.957 [2024-11-20 14:06:13.535908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:05.957 [2024-11-20 14:06:13.535919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:46:05.957 [2024-11-20 14:06:13.535931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.063 ms 00:46:05.957 [2024-11-20 14:06:13.535939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:05.957 [2024-11-20 14:06:13.535962] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:46:05.957 [2024-11-20 
14:06:13.537167] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:46:05.957 [2024-11-20 14:06:13.537206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:05.957 [2024-11-20 14:06:13.537215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:46:05.957 [2024-11-20 14:06:13.537227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.247 ms 00:46:05.957 [2024-11-20 14:06:13.537235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:05.957 [2024-11-20 14:06:13.537277] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 5f8eea4c-b91c-4833-a9f5-b93bc1c846c0 00:46:05.957 [2024-11-20 14:06:13.539806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:05.957 [2024-11-20 14:06:13.539852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:46:05.958 [2024-11-20 14:06:13.539863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:46:05.958 [2024-11-20 14:06:13.539874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:05.958 [2024-11-20 14:06:13.554199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:05.958 [2024-11-20 14:06:13.554253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:46:05.958 [2024-11-20 14:06:13.554264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.285 ms 00:46:05.958 [2024-11-20 14:06:13.554275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:05.958 [2024-11-20 14:06:13.554400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:05.958 [2024-11-20 14:06:13.554419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:46:05.958 [2024-11-20 14:06:13.554429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.034 ms 00:46:05.958 [2024-11-20 14:06:13.554441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:05.958 [2024-11-20 14:06:13.554512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:05.958 [2024-11-20 14:06:13.554525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:46:05.958 [2024-11-20 14:06:13.554533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:46:05.958 [2024-11-20 14:06:13.554549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:05.958 [2024-11-20 14:06:13.554591] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:46:05.958 [2024-11-20 14:06:13.560623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:05.958 [2024-11-20 14:06:13.560660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:46:05.958 [2024-11-20 14:06:13.560688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.048 ms 00:46:05.958 [2024-11-20 14:06:13.560697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:05.958 [2024-11-20 14:06:13.560728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:05.958 [2024-11-20 14:06:13.560751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:46:05.958 [2024-11-20 14:06:13.560762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:46:05.958 [2024-11-20 14:06:13.560770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:46:05.958 [2024-11-20 14:06:13.560805] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:46:05.958 [2024-11-20 14:06:13.560948] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:46:05.958 [2024-11-20 14:06:13.560967] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:46:05.958 [2024-11-20 14:06:13.560979] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:46:05.958 [2024-11-20 14:06:13.560991] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:46:05.958 [2024-11-20 14:06:13.561000] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:46:05.958 [2024-11-20 14:06:13.561011] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:46:05.958 [2024-11-20 14:06:13.561019] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:46:05.958 [2024-11-20 14:06:13.561032] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:46:05.958 [2024-11-20 14:06:13.561040] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:46:05.958 [2024-11-20 14:06:13.561050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:05.958 [2024-11-20 14:06:13.561059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:46:05.958 [2024-11-20 14:06:13.561071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.249 ms 00:46:05.958 [2024-11-20 14:06:13.561079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:05.958 [2024-11-20 14:06:13.561154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:05.958 [2024-11-20 14:06:13.561164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:46:05.958 [2024-11-20 14:06:13.561175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:46:05.958 [2024-11-20 14:06:13.561198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:05.958 [2024-11-20 14:06:13.561302] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:46:05.958 [2024-11-20 14:06:13.561314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:46:05.958 [2024-11-20 14:06:13.561326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:46:05.958 [2024-11-20 14:06:13.561335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:05.958 [2024-11-20 14:06:13.561346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:46:05.958 [2024-11-20 14:06:13.561352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:46:05.958 [2024-11-20 14:06:13.561364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:46:05.958 [2024-11-20 14:06:13.561371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:46:05.958 [2024-11-20 14:06:13.561380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:46:05.958 [2024-11-20 14:06:13.561387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:05.958 [2024-11-20 14:06:13.561396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:46:05.958 [2024-11-20 14:06:13.561405] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:46:05.958 [2024-11-20 14:06:13.561414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:05.958 [2024-11-20 14:06:13.561421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:46:05.958 [2024-11-20 14:06:13.561431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:46:05.958 [2024-11-20 14:06:13.561437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:05.958 [2024-11-20 14:06:13.561450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:46:05.958 [2024-11-20 14:06:13.561457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:46:05.958 [2024-11-20 14:06:13.561465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:05.958 [2024-11-20 14:06:13.561473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:46:05.958 [2024-11-20 14:06:13.561481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:46:05.958 [2024-11-20 14:06:13.561488] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:46:05.958 [2024-11-20 14:06:13.561497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:46:05.958 [2024-11-20 14:06:13.561503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:46:05.958 [2024-11-20 14:06:13.561512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:46:05.958 [2024-11-20 14:06:13.561519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:46:05.958 [2024-11-20 14:06:13.561528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:46:05.958 [2024-11-20 14:06:13.561535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:46:05.958 [2024-11-20 14:06:13.561544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:46:05.958 [2024-11-20 14:06:13.561550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:46:05.958 [2024-11-20 14:06:13.561559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:46:05.958 [2024-11-20 14:06:13.561566] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:46:05.958 [2024-11-20 14:06:13.561577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:46:05.958 [2024-11-20 14:06:13.561583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:05.958 [2024-11-20 14:06:13.561592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:46:05.958 [2024-11-20 14:06:13.561598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:46:05.958 [2024-11-20 14:06:13.561608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:05.958 [2024-11-20 14:06:13.561614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:46:05.958 [2024-11-20 14:06:13.561623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:46:05.958 [2024-11-20 14:06:13.561630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:05.958 [2024-11-20 14:06:13.561638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:46:05.958 [2024-11-20 14:06:13.561645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:46:05.958 [2024-11-20 14:06:13.561655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:05.958 [2024-11-20 14:06:13.561663] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:46:05.958 [2024-11-20 14:06:13.561673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:46:05.958 [2024-11-20 14:06:13.561680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:46:05.958 [2024-11-20 14:06:13.561691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:05.958 [2024-11-20 14:06:13.561698] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:46:05.958 [2024-11-20 14:06:13.561710] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:46:05.958 [2024-11-20 14:06:13.561729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:46:05.958 [2024-11-20 14:06:13.561739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:46:05.958 [2024-11-20 14:06:13.561746] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:46:05.958 [2024-11-20 14:06:13.561756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:46:05.958 [2024-11-20 14:06:13.561768] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:46:05.958 [2024-11-20 14:06:13.561781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:05.958 [2024-11-20 14:06:13.561792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:46:05.958 [2024-11-20 14:06:13.561802] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:46:05.958 [2024-11-20 14:06:13.561809] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:46:05.958 [2024-11-20 14:06:13.561818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:46:05.958 [2024-11-20 14:06:13.561825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:46:05.958 [2024-11-20 14:06:13.561835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:46:05.958 [2024-11-20 14:06:13.561842] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:46:05.959 [2024-11-20 14:06:13.561851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:46:05.959 [2024-11-20 14:06:13.561858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:46:05.959 [2024-11-20 14:06:13.561870] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:46:05.959 [2024-11-20 14:06:13.561878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:46:05.959 [2024-11-20 14:06:13.561887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:46:05.959 [2024-11-20 14:06:13.561894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:46:05.959 [2024-11-20 14:06:13.561903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:46:05.959 [2024-11-20 14:06:13.561910] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:46:05.959 [2024-11-20 14:06:13.561922] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:05.959 [2024-11-20 14:06:13.561930] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:46:05.959 [2024-11-20 14:06:13.561940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:46:05.959 [2024-11-20 14:06:13.561946] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:46:05.959 [2024-11-20 14:06:13.561955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:46:05.959 [2024-11-20 14:06:13.561964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:05.959 [2024-11-20 14:06:13.561974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:46:05.959 [2024-11-20 14:06:13.561982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.724 ms 00:46:05.959 [2024-11-20 14:06:13.561993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:05.959 [2024-11-20 14:06:13.562036] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
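[Note] The scrub notice above (continued below) is the tail of bdev_ftl_create: the FTL device is assembled from the 20 GiB thin-provisioned lvol as the base/data device and the 5 GiB cachen1p0 split as the NV cache write buffer, and the cache region is zeroed before first use. The equivalent standalone RPC, restated from the trace with the lvol UUID from this run (-t 60 widens the client timeout to ride out the scrub):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl \
      -d 5fb90783-3b4a-4131-a6af-f25aac9f0336 \
      -c cachen1p0 \
      --l2p_dram_limit 2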
00:46:05.959 [2024-11-20 14:06:13.562051] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:46:09.253 [2024-11-20 14:06:16.829933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.253 [2024-11-20 14:06:16.830018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:46:09.253 [2024-11-20 14:06:16.830035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3274.197 ms 00:46:09.253 [2024-11-20 14:06:16.830048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.253 [2024-11-20 14:06:16.881805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.253 [2024-11-20 14:06:16.881881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:46:09.253 [2024-11-20 14:06:16.881898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 51.445 ms 00:46:09.253 [2024-11-20 14:06:16.881910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.253 [2024-11-20 14:06:16.882063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.253 [2024-11-20 14:06:16.882079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:46:09.253 [2024-11-20 14:06:16.882088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:46:09.253 [2024-11-20 14:06:16.882107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.253 [2024-11-20 14:06:16.935868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.253 [2024-11-20 14:06:16.935944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:46:09.253 [2024-11-20 14:06:16.935959] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 53.824 ms 00:46:09.253 [2024-11-20 14:06:16.935970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.253 [2024-11-20 14:06:16.936035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.253 [2024-11-20 14:06:16.936053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:46:09.253 [2024-11-20 14:06:16.936064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:46:09.253 [2024-11-20 14:06:16.936075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.253 [2024-11-20 14:06:16.936963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.253 [2024-11-20 14:06:16.936988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:46:09.253 [2024-11-20 14:06:16.936998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.795 ms 00:46:09.253 [2024-11-20 14:06:16.937009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.253 [2024-11-20 14:06:16.937071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.253 [2024-11-20 14:06:16.937083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:46:09.253 [2024-11-20 14:06:16.937094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:46:09.253 [2024-11-20 14:06:16.937108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.253 [2024-11-20 14:06:16.962536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.253 [2024-11-20 14:06:16.962598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:46:09.253 [2024-11-20 14:06:16.962612] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.456 ms 00:46:09.253 [2024-11-20 14:06:16.962623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.513 [2024-11-20 14:06:16.989391] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:46:09.513 [2024-11-20 14:06:16.991208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.513 [2024-11-20 14:06:16.991233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:46:09.513 [2024-11-20 14:06:16.991248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.493 ms 00:46:09.513 [2024-11-20 14:06:16.991257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.513 [2024-11-20 14:06:17.027410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.513 [2024-11-20 14:06:17.027477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:46:09.513 [2024-11-20 14:06:17.027497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.165 ms 00:46:09.513 [2024-11-20 14:06:17.027507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.513 [2024-11-20 14:06:17.027623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.513 [2024-11-20 14:06:17.027637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:46:09.513 [2024-11-20 14:06:17.027653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:46:09.513 [2024-11-20 14:06:17.027661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.513 [2024-11-20 14:06:17.067732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.513 [2024-11-20 14:06:17.067791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:46:09.513 [2024-11-20 14:06:17.067825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.088 ms 00:46:09.513 [2024-11-20 14:06:17.067840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.513 [2024-11-20 14:06:17.107084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.513 [2024-11-20 14:06:17.107146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:46:09.513 [2024-11-20 14:06:17.107164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.253 ms 00:46:09.513 [2024-11-20 14:06:17.107173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.513 [2024-11-20 14:06:17.108057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.513 [2024-11-20 14:06:17.108088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:46:09.513 [2024-11-20 14:06:17.108102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.829 ms 00:46:09.513 [2024-11-20 14:06:17.108115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.513 [2024-11-20 14:06:17.225775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.513 [2024-11-20 14:06:17.225841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:46:09.513 [2024-11-20 14:06:17.225881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 117.811 ms 00:46:09.513 [2024-11-20 14:06:17.225890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.773 [2024-11-20 14:06:17.266138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:46:09.773 [2024-11-20 14:06:17.266210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:46:09.773 [2024-11-20 14:06:17.266244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.202 ms 00:46:09.773 [2024-11-20 14:06:17.266254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.773 [2024-11-20 14:06:17.305607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.773 [2024-11-20 14:06:17.305677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:46:09.773 [2024-11-20 14:06:17.305693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 39.374 ms 00:46:09.773 [2024-11-20 14:06:17.305701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.773 [2024-11-20 14:06:17.349727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.773 [2024-11-20 14:06:17.349805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:46:09.773 [2024-11-20 14:06:17.349825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 44.048 ms 00:46:09.773 [2024-11-20 14:06:17.349834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.773 [2024-11-20 14:06:17.349902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.773 [2024-11-20 14:06:17.349913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:46:09.773 [2024-11-20 14:06:17.349929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:46:09.773 [2024-11-20 14:06:17.349938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.773 [2024-11-20 14:06:17.350052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:09.773 [2024-11-20 14:06:17.350063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:46:09.773 [2024-11-20 14:06:17.350078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:46:09.773 [2024-11-20 14:06:17.350086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:09.773 [2024-11-20 14:06:17.351596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3822.767 ms, result 0 00:46:09.773 { 00:46:09.773 "name": "ftl", 00:46:09.773 "uuid": "5f8eea4c-b91c-4833-a9f5-b93bc1c846c0" 00:46:09.773 } 00:46:09.773 14:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:46:10.033 [2024-11-20 14:06:17.581973] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:46:10.033 14:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:46:10.293 14:06:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:46:10.553 [2024-11-20 14:06:18.013587] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:46:10.553 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:46:10.553 [2024-11-20 14:06:18.228497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:46:10.553 14:06:18 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:46:11.123 Fill FTL, iteration 1 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=83947 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 83947 /var/tmp/spdk.tgt.sock 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 83947 ']' 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:11.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:11.123 14:06:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:46:11.123 [2024-11-20 14:06:18.705305] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
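[Note] Here tcp_initiator_setup starts a second spdk_tgt to act as the NVMe/TCP initiator, pinned to core 1 and answering RPCs on a separate socket so it cannot collide with the target's /var/tmp/spdk.sock; its bdev config is then captured below into ini.json for the spdk_dd runs. Restated from the trace (waitforlisten takes the alternate socket as its second argument):

  spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$spdk_ini_bin" --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  spdk_ini_pid=$!
  waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock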
00:46:11.123 [2024-11-20 14:06:18.705450] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83947 ] 00:46:11.382 [2024-11-20 14:06:18.883962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:11.382 [2024-11-20 14:06:19.030746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:12.763 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:46:12.763 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:46:12.763 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:46:12.763 ftln1 00:46:12.763 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:46:12.764 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:46:13.023 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:46:13.023 14:06:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 83947 00:46:13.023 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83947 ']' 00:46:13.023 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83947 00:46:13.023 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:46:13.023 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:13.023 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83947 00:46:13.023 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:46:13.023 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:46:13.023 killing process with pid 83947 00:46:13.023 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83947' 00:46:13.023 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83947 00:46:13.023 14:06:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83947 00:46:15.599 14:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:46:15.599 14:06:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:46:15.859 [2024-11-20 14:06:23.335235] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
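[Note] The fill is not a local dd: spdk_dd loads the bdev config captured into ini.json (the attach_controller output above, wrapped in a {"subsystems": [...]} envelope), connects to the exported subsystem as an NVMe/TCP initiator, and streams 1 GiB of /dev/urandom into ftln1. The command, restated from the trace (1024 writes of 1 MiB at queue depth 2, starting at block 0):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --cpumask='[1]' \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0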
00:46:15.859 [2024-11-20 14:06:23.335980] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84006 ] 00:46:15.859 [2024-11-20 14:06:23.517688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:16.118 [2024-11-20 14:06:23.663041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:17.499  [2024-11-20T14:06:26.601Z] Copying: 242/1024 [MB] (242 MBps) [2024-11-20T14:06:27.540Z] Copying: 480/1024 [MB] (238 MBps) [2024-11-20T14:06:28.480Z] Copying: 724/1024 [MB] (244 MBps) [2024-11-20T14:06:28.480Z] Copying: 959/1024 [MB] (235 MBps) [2024-11-20T14:06:29.858Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:46:22.139 00:46:22.139 Calculate MD5 checksum, iteration 1 00:46:22.139 14:06:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:46:22.139 14:06:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:46:22.139 14:06:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:46:22.139 14:06:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:46:22.140 14:06:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:46:22.140 14:06:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:46:22.140 14:06:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:46:22.140 14:06:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:46:22.399 [2024-11-20 14:06:29.895535] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
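[Note] Verification read-back, as traced above: the just-written gigabyte is pulled out of ftln1 into test/ftl/file over the same NVMe/TCP path, then hashed; the script records one MD5 per iteration. In outline (tcp_dd is the suite's wrapper around the spdk_dd invocation shown above; the sums assignment is the upgrade_shutdown.sh@48 step in the trace):

  tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
         --bs=1048576 --count=1024 --qd=2 --skip=0
  sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')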
00:46:22.399 [2024-11-20 14:06:29.895682] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84074 ] 00:46:22.399 [2024-11-20 14:06:30.071921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:22.658 [2024-11-20 14:06:30.225417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:24.565  [2024-11-20T14:06:32.543Z] Copying: 630/1024 [MB] (630 MBps) [2024-11-20T14:06:33.933Z] Copying: 1024/1024 [MB] (average 630 MBps) 00:46:26.214 00:46:26.214 14:06:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:46:26.214 14:06:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:46:28.121 14:06:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:46:28.121 Fill FTL, iteration 2 00:46:28.121 14:06:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c69469ef8eba3dbd4c3f1c8f3cfb6883 00:46:28.121 14:06:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:46:28.121 14:06:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:46:28.121 14:06:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:46:28.121 14:06:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:46:28.121 14:06:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:46:28.121 14:06:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:46:28.121 14:06:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:46:28.121 14:06:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:46:28.121 14:06:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:46:28.121 [2024-11-20 14:06:35.507631] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
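[Note] Iteration 2 repeats the fill shifted by one gigabyte: --seek=1024 lands the new data in blocks 1024-2047, and the matching read-back below uses --skip=1024, so each GiB region ends up with its own recorded checksum. The loop's shape, reconstructed from the upgrade_shutdown.sh trace markers (a sketch using the script's own variable names, not the script verbatim):

  for (( i = 0; i < iterations; i++ )); do
      # write the next GiB, then advance the write offset
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
      seek=$((seek + count))
      # read the same region back and record its hash, then advance the read offset
      tcp_dd --ib=ftln1 --of=$testdir/file --bs=$bs --count=$count --qd=$qd --skip=$skip
      sums[i]=$(md5sum "$testdir/file" | cut -f1 -d' ')
      skip=$((skip + count))
  done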
00:46:28.121 [2024-11-20 14:06:35.507954] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84137 ] 00:46:28.121 [2024-11-20 14:06:35.686910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:28.121 [2024-11-20 14:06:35.829390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:30.026  [2024-11-20T14:06:38.680Z] Copying: 236/1024 [MB] (236 MBps) [2024-11-20T14:06:39.616Z] Copying: 474/1024 [MB] (238 MBps) [2024-11-20T14:06:40.551Z] Copying: 705/1024 [MB] (231 MBps) [2024-11-20T14:06:41.116Z] Copying: 929/1024 [MB] (224 MBps) [2024-11-20T14:06:42.488Z] Copying: 1024/1024 [MB] (average 229 MBps) 00:46:34.769 00:46:34.769 Calculate MD5 checksum, iteration 2 00:46:34.769 14:06:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:46:34.769 14:06:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:46:34.769 14:06:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:46:34.769 14:06:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:46:34.769 14:06:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:46:34.769 14:06:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:46:34.769 14:06:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:46:34.769 14:06:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:46:34.769 [2024-11-20 14:06:42.283206] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
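After the second pass the test stops moving data and, as the records below show, flips FTL properties over the target's RPC socket before shutting it down. Condensed into script form, as a digest of upgrade_shutdown.sh@52-@64 as traced (rpc is shorthand for the script's $rpc variable, scripts/rpc.py -s /var/tmp/spdk.tgt.sock; the failure handling is assumed, since only the [[ 3 -eq 0 ]] test itself is visible):

    # Digest of the property sequence traced below; control flow partly assumed.
    rpc() { "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.tgt.sock "$@"; }

    rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true
    rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    # Count NV-cache chunks that actually hold data; in the dump below that is
    # two CLOSED chunks plus one partially filled OPEN chunk, so used=3.
    used=$(rpc bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device")
               | .chunks[] | select(.utilization != 0.0)] | length')
    if [[ $used -eq 0 ]]; then
        # Assumed failure path: an empty cache would make the shutdown test moot.
        echo "no dirty NV cache chunks to carry across the upgrade" >&2
        exit 1
    fi

With prep_upgrade_on_shutdown set to true, the subsequent target shutdown persists everything a new FTL version needs to adopt the device, which is exactly what the long 'FTL shutdown' sequence further below exercises.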
00:46:34.769 [2024-11-20 14:06:42.283370] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84202 ] 00:46:34.769 [2024-11-20 14:06:42.464163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:35.027 [2024-11-20 14:06:42.610342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:46:36.927  [2024-11-20T14:06:45.215Z] Copying: 583/1024 [MB] (583 MBps) [2024-11-20T14:06:47.124Z] Copying: 1024/1024 [MB] (average 574 MBps) 00:46:39.405 00:46:39.405 14:06:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:46:39.405 14:06:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:46:41.334 14:06:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:46:41.334 14:06:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=fd7d2aa01fdfb538d4608d0ffbb190cf 00:46:41.334 14:06:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:46:41.334 14:06:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:46:41.334 14:06:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:46:41.334 [2024-11-20 14:06:48.776110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:41.334 [2024-11-20 14:06:48.776199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:46:41.334 [2024-11-20 14:06:48.776236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:46:41.334 [2024-11-20 14:06:48.776249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:41.334 [2024-11-20 14:06:48.776289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:41.334 [2024-11-20 14:06:48.776303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:46:41.334 [2024-11-20 14:06:48.776323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:46:41.334 [2024-11-20 14:06:48.776335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:41.334 [2024-11-20 14:06:48.776360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:41.334 [2024-11-20 14:06:48.776373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:46:41.334 [2024-11-20 14:06:48.776384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:46:41.334 [2024-11-20 14:06:48.776395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:41.334 [2024-11-20 14:06:48.776477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.369 ms, result 0 00:46:41.334 true 00:46:41.334 14:06:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:46:41.334 { 00:46:41.334 "name": "ftl", 00:46:41.334 "properties": [ 00:46:41.334 { 00:46:41.334 "name": "superblock_version", 00:46:41.334 "value": 5, 00:46:41.334 "read-only": true 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "name": "base_device", 00:46:41.334 "bands": [ 00:46:41.334 { 00:46:41.334 "id": 0, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 
00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 1, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 2, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 3, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 4, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 5, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 6, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 7, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 8, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 9, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 10, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 11, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 12, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 13, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 14, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 15, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 16, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 17, 00:46:41.334 "state": "FREE", 00:46:41.334 "validity": 0.0 00:46:41.334 } 00:46:41.334 ], 00:46:41.334 "read-only": true 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "name": "cache_device", 00:46:41.334 "type": "bdev", 00:46:41.334 "chunks": [ 00:46:41.334 { 00:46:41.334 "id": 0, 00:46:41.334 "state": "INACTIVE", 00:46:41.334 "utilization": 0.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 1, 00:46:41.334 "state": "CLOSED", 00:46:41.334 "utilization": 1.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 2, 00:46:41.334 "state": "CLOSED", 00:46:41.334 "utilization": 1.0 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 3, 00:46:41.334 "state": "OPEN", 00:46:41.334 "utilization": 0.001953125 00:46:41.334 }, 00:46:41.334 { 00:46:41.334 "id": 4, 00:46:41.334 "state": "OPEN", 00:46:41.334 "utilization": 0.0 00:46:41.334 } 00:46:41.334 ], 00:46:41.334 "read-only": true 00:46:41.334 }, 00:46:41.334 { 00:46:41.335 "name": "verbose_mode", 00:46:41.335 "value": true, 00:46:41.335 "unit": "", 00:46:41.335 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:46:41.335 }, 00:46:41.335 { 00:46:41.335 "name": "prep_upgrade_on_shutdown", 00:46:41.335 "value": false, 00:46:41.335 "unit": "", 00:46:41.335 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:46:41.335 } 00:46:41.335 ] 00:46:41.335 } 00:46:41.335 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:46:41.595 [2024-11-20 14:06:49.227837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:46:41.595 [2024-11-20 14:06:49.227934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:46:41.595 [2024-11-20 14:06:49.227953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:46:41.595 [2024-11-20 14:06:49.227964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:41.595 [2024-11-20 14:06:49.227999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:41.595 [2024-11-20 14:06:49.228011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:46:41.595 [2024-11-20 14:06:49.228021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:46:41.595 [2024-11-20 14:06:49.228032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:41.595 [2024-11-20 14:06:49.228054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:41.595 [2024-11-20 14:06:49.228064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:46:41.595 [2024-11-20 14:06:49.228073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:46:41.595 [2024-11-20 14:06:49.228082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:41.595 [2024-11-20 14:06:49.228161] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.319 ms, result 0 00:46:41.595 true 00:46:41.595 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:46:41.595 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:46:41.595 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:46:41.855 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:46:41.855 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:46:41.855 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:46:42.115 [2024-11-20 14:06:49.703379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:42.115 [2024-11-20 14:06:49.703468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:46:42.115 [2024-11-20 14:06:49.703489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:46:42.115 [2024-11-20 14:06:49.703502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:42.115 [2024-11-20 14:06:49.703539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:42.115 [2024-11-20 14:06:49.703552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:46:42.115 [2024-11-20 14:06:49.703563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:46:42.115 [2024-11-20 14:06:49.703574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:42.115 [2024-11-20 14:06:49.703599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:42.115 [2024-11-20 14:06:49.703611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:46:42.115 [2024-11-20 14:06:49.703622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:46:42.115 [2024-11-20 14:06:49.703633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:46:42.115 [2024-11-20 14:06:49.703735] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.332 ms, result 0 00:46:42.115 true 00:46:42.115 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:46:42.375 { 00:46:42.375 "name": "ftl", 00:46:42.375 "properties": [ 00:46:42.375 { 00:46:42.375 "name": "superblock_version", 00:46:42.375 "value": 5, 00:46:42.375 "read-only": true 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "name": "base_device", 00:46:42.375 "bands": [ 00:46:42.375 { 00:46:42.375 "id": 0, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 1, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 2, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 3, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 4, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 5, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 6, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 7, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 8, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 9, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 10, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 11, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 12, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 13, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 14, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 15, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 16, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 17, 00:46:42.375 "state": "FREE", 00:46:42.375 "validity": 0.0 00:46:42.375 } 00:46:42.375 ], 00:46:42.375 "read-only": true 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "name": "cache_device", 00:46:42.375 "type": "bdev", 00:46:42.375 "chunks": [ 00:46:42.375 { 00:46:42.375 "id": 0, 00:46:42.375 "state": "INACTIVE", 00:46:42.375 "utilization": 0.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 1, 00:46:42.375 "state": "CLOSED", 00:46:42.375 "utilization": 1.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 2, 00:46:42.375 "state": "CLOSED", 00:46:42.375 "utilization": 1.0 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 3, 00:46:42.375 "state": "OPEN", 00:46:42.375 "utilization": 0.001953125 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "id": 4, 00:46:42.375 "state": "OPEN", 00:46:42.375 "utilization": 0.0 00:46:42.375 } 00:46:42.375 ], 00:46:42.375 "read-only": true 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "name": "verbose_mode", 
00:46:42.375 "value": true, 00:46:42.375 "unit": "", 00:46:42.375 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:46:42.375 }, 00:46:42.375 { 00:46:42.375 "name": "prep_upgrade_on_shutdown", 00:46:42.375 "value": true, 00:46:42.375 "unit": "", 00:46:42.375 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:46:42.375 } 00:46:42.375 ] 00:46:42.375 } 00:46:42.375 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:46:42.375 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 83817 ]] 00:46:42.375 14:06:49 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 83817 00:46:42.375 14:06:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 83817 ']' 00:46:42.375 14:06:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 83817 00:46:42.375 14:06:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:46:42.375 14:06:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:46:42.375 14:06:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83817 00:46:42.375 killing process with pid 83817 00:46:42.375 14:06:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:46:42.375 14:06:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:46:42.375 14:06:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83817' 00:46:42.375 14:06:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 83817 00:46:42.375 14:06:49 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 83817 00:46:43.755 [2024-11-20 14:06:51.253702] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:46:43.755 [2024-11-20 14:06:51.276270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:43.755 [2024-11-20 14:06:51.276348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:46:43.755 [2024-11-20 14:06:51.276367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:46:43.755 [2024-11-20 14:06:51.276379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:43.755 [2024-11-20 14:06:51.276408] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:46:43.755 [2024-11-20 14:06:51.281537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:43.755 [2024-11-20 14:06:51.281576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:46:43.755 [2024-11-20 14:06:51.281589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.120 ms 00:46:43.755 [2024-11-20 14:06:51.281615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.881 [2024-11-20 14:06:58.710449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:51.881 [2024-11-20 14:06:58.710532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:46:51.881 [2024-11-20 14:06:58.710551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7443.109 ms 00:46:51.881 [2024-11-20 14:06:58.710567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.881 [2024-11-20 14:06:58.711833] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:46:51.881 [2024-11-20 14:06:58.711894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:46:51.881 [2024-11-20 14:06:58.711906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.246 ms 00:46:51.881 [2024-11-20 14:06:58.711917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.881 [2024-11-20 14:06:58.713016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:51.881 [2024-11-20 14:06:58.713047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:46:51.881 [2024-11-20 14:06:58.713058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.068 ms 00:46:51.881 [2024-11-20 14:06:58.713082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.881 [2024-11-20 14:06:58.731100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:51.881 [2024-11-20 14:06:58.731173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:46:51.881 [2024-11-20 14:06:58.731188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.982 ms 00:46:51.881 [2024-11-20 14:06:58.731200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.881 [2024-11-20 14:06:58.741909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:51.881 [2024-11-20 14:06:58.741985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:46:51.881 [2024-11-20 14:06:58.742001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.671 ms 00:46:51.881 [2024-11-20 14:06:58.742011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.881 [2024-11-20 14:06:58.742160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:51.881 [2024-11-20 14:06:58.742185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:46:51.881 [2024-11-20 14:06:58.742207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.090 ms 00:46:51.881 [2024-11-20 14:06:58.742217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.881 [2024-11-20 14:06:58.760109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:51.881 [2024-11-20 14:06:58.760182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:46:51.881 [2024-11-20 14:06:58.760197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.902 ms 00:46:51.881 [2024-11-20 14:06:58.760207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.881 [2024-11-20 14:06:58.778193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:51.881 [2024-11-20 14:06:58.778269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:46:51.881 [2024-11-20 14:06:58.778285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.966 ms 00:46:51.881 [2024-11-20 14:06:58.778297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.881 [2024-11-20 14:06:58.795909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:51.881 [2024-11-20 14:06:58.795985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:46:51.882 [2024-11-20 14:06:58.796000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.587 ms 00:46:51.882 [2024-11-20 14:06:58.796008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 [2024-11-20 14:06:58.814367] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:51.882 [2024-11-20 14:06:58.814437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:46:51.882 [2024-11-20 14:06:58.814453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.262 ms 00:46:51.882 [2024-11-20 14:06:58.814462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 [2024-11-20 14:06:58.814517] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:46:51.882 [2024-11-20 14:06:58.814540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:46:51.882 [2024-11-20 14:06:58.814553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:46:51.882 [2024-11-20 14:06:58.814599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:46:51.882 [2024-11-20 14:06:58.814611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:46:51.882 [2024-11-20 14:06:58.814620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:46:51.882 [2024-11-20 14:06:58.814631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:46:51.882 [2024-11-20 14:06:58.814640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:46:51.882 [2024-11-20 14:06:58.814650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:46:51.882 [2024-11-20 14:06:58.814660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:46:51.882 [2024-11-20 14:06:58.814669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:46:51.882 [2024-11-20 14:06:58.814679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:46:51.882 [2024-11-20 14:06:58.814688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:46:51.882 [2024-11-20 14:06:58.814697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:46:51.882 [2024-11-20 14:06:58.814706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:46:51.882 [2024-11-20 14:06:58.814733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:46:51.882 [2024-11-20 14:06:58.814745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:46:51.882 [2024-11-20 14:06:58.814755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:46:51.882 [2024-11-20 14:06:58.814764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:46:51.882 [2024-11-20 14:06:58.814776] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:46:51.882 [2024-11-20 14:06:58.814786] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5f8eea4c-b91c-4833-a9f5-b93bc1c846c0 00:46:51.882 [2024-11-20 14:06:58.814796] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:46:51.882 [2024-11-20 14:06:58.814806] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:46:51.882 [2024-11-20 14:06:58.814815] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:46:51.882 [2024-11-20 14:06:58.814825] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:46:51.882 [2024-11-20 14:06:58.814834] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:46:51.882 [2024-11-20 14:06:58.814850] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:46:51.882 [2024-11-20 14:06:58.814860] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:46:51.882 [2024-11-20 14:06:58.814869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:46:51.882 [2024-11-20 14:06:58.814877] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:46:51.882 [2024-11-20 14:06:58.814887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:51.882 [2024-11-20 14:06:58.814909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:46:51.882 [2024-11-20 14:06:58.814922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.374 ms 00:46:51.882 [2024-11-20 14:06:58.814932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 [2024-11-20 14:06:58.838781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:51.882 [2024-11-20 14:06:58.838848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:46:51.882 [2024-11-20 14:06:58.838862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.837 ms 00:46:51.882 [2024-11-20 14:06:58.838884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 [2024-11-20 14:06:58.839526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:51.882 [2024-11-20 14:06:58.839555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:46:51.882 [2024-11-20 14:06:58.839567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.592 ms 00:46:51.882 [2024-11-20 14:06:58.839577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 [2024-11-20 14:06:58.916560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:46:51.882 [2024-11-20 14:06:58.916630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:46:51.882 [2024-11-20 14:06:58.916652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:46:51.882 [2024-11-20 14:06:58.916661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 [2024-11-20 14:06:58.916738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:46:51.882 [2024-11-20 14:06:58.916752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:46:51.882 [2024-11-20 14:06:58.916762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:46:51.882 [2024-11-20 14:06:58.916772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 [2024-11-20 14:06:58.916903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:46:51.882 [2024-11-20 14:06:58.916946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:46:51.882 [2024-11-20 14:06:58.916958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:46:51.882 [2024-11-20 14:06:58.916972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 [2024-11-20 14:06:58.916993] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:46:51.882 [2024-11-20 14:06:58.917011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:46:51.882 [2024-11-20 14:06:58.917021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:46:51.882 [2024-11-20 14:06:58.917030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 [2024-11-20 14:06:59.063297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:46:51.882 [2024-11-20 14:06:59.063373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:46:51.882 [2024-11-20 14:06:59.063397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:46:51.882 [2024-11-20 14:06:59.063408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 [2024-11-20 14:06:59.183173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:46:51.882 [2024-11-20 14:06:59.183244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:46:51.882 [2024-11-20 14:06:59.183259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:46:51.882 [2024-11-20 14:06:59.183269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 [2024-11-20 14:06:59.183396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:46:51.882 [2024-11-20 14:06:59.183412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:46:51.882 [2024-11-20 14:06:59.183424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:46:51.882 [2024-11-20 14:06:59.183435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 [2024-11-20 14:06:59.183507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:46:51.882 [2024-11-20 14:06:59.183533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:46:51.882 [2024-11-20 14:06:59.183546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:46:51.882 [2024-11-20 14:06:59.183555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 [2024-11-20 14:06:59.183687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:46:51.882 [2024-11-20 14:06:59.183712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:46:51.882 [2024-11-20 14:06:59.183750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:46:51.882 [2024-11-20 14:06:59.183760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 [2024-11-20 14:06:59.183816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:46:51.882 [2024-11-20 14:06:59.183842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:46:51.882 [2024-11-20 14:06:59.183853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:46:51.882 [2024-11-20 14:06:59.183874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 [2024-11-20 14:06:59.183933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:46:51.882 [2024-11-20 14:06:59.183952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:46:51.882 [2024-11-20 14:06:59.183961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:46:51.882 [2024-11-20 14:06:59.183971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 
[2024-11-20 14:06:59.184027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:46:51.882 [2024-11-20 14:06:59.184049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:46:51.882 [2024-11-20 14:06:59.184060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:46:51.882 [2024-11-20 14:06:59.184069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:51.882 [2024-11-20 14:06:59.184215] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 7923.167 ms, result 0 00:46:58.511 14:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:46:58.511 14:07:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:46:58.511 14:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:46:58.511 14:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:46:58.511 14:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:46:58.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:46:58.511 14:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84452 00:46:58.511 14:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:46:58.511 14:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:46:58.511 14:07:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84452 00:46:58.511 14:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84452 ']' 00:46:58.511 14:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:46:58.511 14:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:46:58.511 14:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:46:58.511 14:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:46:58.511 14:07:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:46:58.512 [2024-11-20 14:07:05.621549] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
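The gap between the 'FTL shutdown' record above (duration = 7923.167 ms, result 0) and these startup records is the harness restarting the target: killprocess tears down pid 83817, and a fresh spdk_tgt comes up on core 0 from the saved tgt.json while the script blocks until the RPC socket answers. Schematically (killprocess and waitforlisten are the test/common/autotest_common.sh helpers whose expansions appear in the trace; backgrounding with & is assumed):

    # Schematic restart, pieced together from the traced helper expansions.
    killprocess 83817                       # kill -0 liveness probe, TERM, then wait (@958-@978)
    "$rootdir/build/bin/spdk_tgt" '--cpumask=[0]' \
        --config="$rootdir/test/ftl/config/tgt.json" &
    spdk_tgt_pid=$!                         # 84452 in this run
    # Block until the new target is listening on /var/tmp/spdk.sock before
    # issuing any further RPCs against it.
    waitforlisten "$spdk_tgt_pid"

The startup records that follow show the restarted target finding the dirty FTL state (SHM: clean 0, shm_clean 0), scrubbing the NV cache where needed, and restoring L2P, band, and P2L metadata persisted by the shutdown.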
00:46:58.512 [2024-11-20 14:07:05.621686] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84452 ] 00:46:58.512 [2024-11-20 14:07:05.788686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:46:58.512 [2024-11-20 14:07:05.908586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:46:59.452 [2024-11-20 14:07:06.882198] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:46:59.452 [2024-11-20 14:07:06.882277] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:46:59.452 [2024-11-20 14:07:07.028429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:59.452 [2024-11-20 14:07:07.028487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:46:59.452 [2024-11-20 14:07:07.028502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:46:59.452 [2024-11-20 14:07:07.028511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:59.452 [2024-11-20 14:07:07.028589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:59.452 [2024-11-20 14:07:07.028602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:46:59.452 [2024-11-20 14:07:07.028612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:46:59.452 [2024-11-20 14:07:07.028620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:59.452 [2024-11-20 14:07:07.028645] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:46:59.452 [2024-11-20 14:07:07.029596] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:46:59.452 [2024-11-20 14:07:07.029628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:59.452 [2024-11-20 14:07:07.029637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:46:59.452 [2024-11-20 14:07:07.029647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.992 ms 00:46:59.452 [2024-11-20 14:07:07.029655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:59.452 [2024-11-20 14:07:07.031108] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:46:59.452 [2024-11-20 14:07:07.050816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:59.452 [2024-11-20 14:07:07.050870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:46:59.452 [2024-11-20 14:07:07.050889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.746 ms 00:46:59.452 [2024-11-20 14:07:07.050906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:59.452 [2024-11-20 14:07:07.050968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:59.452 [2024-11-20 14:07:07.050980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:46:59.452 [2024-11-20 14:07:07.050990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:46:59.452 [2024-11-20 14:07:07.050998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:59.452 [2024-11-20 14:07:07.057736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:59.452 [2024-11-20 
14:07:07.057766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:46:59.452 [2024-11-20 14:07:07.057776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.673 ms 00:46:59.452 [2024-11-20 14:07:07.057784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:59.452 [2024-11-20 14:07:07.057850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:59.452 [2024-11-20 14:07:07.057864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:46:59.452 [2024-11-20 14:07:07.057873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:46:59.452 [2024-11-20 14:07:07.057881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:59.452 [2024-11-20 14:07:07.057930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:59.452 [2024-11-20 14:07:07.057941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:46:59.452 [2024-11-20 14:07:07.057952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:46:59.452 [2024-11-20 14:07:07.057960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:59.452 [2024-11-20 14:07:07.057986] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:46:59.452 [2024-11-20 14:07:07.062790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:59.452 [2024-11-20 14:07:07.062820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:46:59.452 [2024-11-20 14:07:07.062830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.821 ms 00:46:59.452 [2024-11-20 14:07:07.062841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:59.452 [2024-11-20 14:07:07.062867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:59.452 [2024-11-20 14:07:07.062876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:46:59.452 [2024-11-20 14:07:07.062885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:46:59.452 [2024-11-20 14:07:07.062893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:59.452 [2024-11-20 14:07:07.062943] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:46:59.452 [2024-11-20 14:07:07.062967] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:46:59.452 [2024-11-20 14:07:07.063004] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:46:59.452 [2024-11-20 14:07:07.063020] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:46:59.452 [2024-11-20 14:07:07.063110] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:46:59.452 [2024-11-20 14:07:07.063136] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:46:59.452 [2024-11-20 14:07:07.063148] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:46:59.452 [2024-11-20 14:07:07.063160] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:46:59.452 [2024-11-20 14:07:07.063169] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:46:59.452 [2024-11-20 14:07:07.063181] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:46:59.452 [2024-11-20 14:07:07.063189] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:46:59.452 [2024-11-20 14:07:07.063197] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:46:59.452 [2024-11-20 14:07:07.063205] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:46:59.452 [2024-11-20 14:07:07.063212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:59.452 [2024-11-20 14:07:07.063220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:46:59.452 [2024-11-20 14:07:07.063228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.273 ms 00:46:59.452 [2024-11-20 14:07:07.063236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:59.452 [2024-11-20 14:07:07.063325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:59.452 [2024-11-20 14:07:07.063342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:46:59.452 [2024-11-20 14:07:07.063351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:46:59.452 [2024-11-20 14:07:07.063362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:59.452 [2024-11-20 14:07:07.063455] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:46:59.452 [2024-11-20 14:07:07.063469] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:46:59.452 [2024-11-20 14:07:07.063478] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:46:59.452 [2024-11-20 14:07:07.063486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:59.452 [2024-11-20 14:07:07.063497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:46:59.452 [2024-11-20 14:07:07.063504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:46:59.452 [2024-11-20 14:07:07.063512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:46:59.452 [2024-11-20 14:07:07.063521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:46:59.452 [2024-11-20 14:07:07.063528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:46:59.452 [2024-11-20 14:07:07.063536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:59.452 [2024-11-20 14:07:07.063543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:46:59.452 [2024-11-20 14:07:07.063551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:46:59.453 [2024-11-20 14:07:07.063558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:59.453 [2024-11-20 14:07:07.063565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:46:59.453 [2024-11-20 14:07:07.063574] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:46:59.453 [2024-11-20 14:07:07.063581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:59.453 [2024-11-20 14:07:07.063588] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:46:59.453 [2024-11-20 14:07:07.063595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:46:59.453 [2024-11-20 14:07:07.063602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:59.453 [2024-11-20 14:07:07.063609] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:46:59.453 [2024-11-20 14:07:07.063617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:46:59.453 [2024-11-20 14:07:07.063626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:46:59.453 [2024-11-20 14:07:07.063633] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:46:59.453 [2024-11-20 14:07:07.063640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:46:59.453 [2024-11-20 14:07:07.063647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:46:59.453 [2024-11-20 14:07:07.063668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:46:59.453 [2024-11-20 14:07:07.063677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:46:59.453 [2024-11-20 14:07:07.063684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:46:59.453 [2024-11-20 14:07:07.063691] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:46:59.453 [2024-11-20 14:07:07.063698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:46:59.453 [2024-11-20 14:07:07.063705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:46:59.453 [2024-11-20 14:07:07.063712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:46:59.453 [2024-11-20 14:07:07.063733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:46:59.453 [2024-11-20 14:07:07.063740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:59.453 [2024-11-20 14:07:07.063747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:46:59.453 [2024-11-20 14:07:07.063754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:46:59.453 [2024-11-20 14:07:07.063762] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:59.453 [2024-11-20 14:07:07.063768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:46:59.453 [2024-11-20 14:07:07.063775] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:46:59.453 [2024-11-20 14:07:07.063783] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:59.453 [2024-11-20 14:07:07.063791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:46:59.453 [2024-11-20 14:07:07.063802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:46:59.453 [2024-11-20 14:07:07.063809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:59.453 [2024-11-20 14:07:07.063816] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:46:59.453 [2024-11-20 14:07:07.063824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:46:59.453 [2024-11-20 14:07:07.063832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:46:59.453 [2024-11-20 14:07:07.063841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:46:59.453 [2024-11-20 14:07:07.063853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:46:59.453 [2024-11-20 14:07:07.063861] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:46:59.453 [2024-11-20 14:07:07.063867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:46:59.453 [2024-11-20 14:07:07.063883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:46:59.453 [2024-11-20 14:07:07.063890] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:46:59.453 [2024-11-20 14:07:07.063897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:46:59.453 [2024-11-20 14:07:07.063913] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:46:59.453 [2024-11-20 14:07:07.063923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:59.453 [2024-11-20 14:07:07.063932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:46:59.453 [2024-11-20 14:07:07.063941] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:46:59.453 [2024-11-20 14:07:07.063949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:46:59.453 [2024-11-20 14:07:07.063957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:46:59.453 [2024-11-20 14:07:07.063964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:46:59.453 [2024-11-20 14:07:07.063972] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:46:59.453 [2024-11-20 14:07:07.063979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:46:59.453 [2024-11-20 14:07:07.063988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:46:59.453 [2024-11-20 14:07:07.063995] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:46:59.453 [2024-11-20 14:07:07.064002] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:46:59.453 [2024-11-20 14:07:07.064009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:46:59.453 [2024-11-20 14:07:07.064016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:46:59.453 [2024-11-20 14:07:07.064025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:46:59.453 [2024-11-20 14:07:07.064032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:46:59.453 [2024-11-20 14:07:07.064040] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:46:59.453 [2024-11-20 14:07:07.064050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:46:59.453 [2024-11-20 14:07:07.064058] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:46:59.453 [2024-11-20 14:07:07.064066] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:46:59.453 [2024-11-20 14:07:07.064076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:46:59.453 [2024-11-20 14:07:07.064085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:46:59.453 [2024-11-20 14:07:07.064093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:46:59.453 [2024-11-20 14:07:07.064102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:46:59.453 [2024-11-20 14:07:07.064110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.695 ms 00:46:59.453 [2024-11-20 14:07:07.064119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:46:59.453 [2024-11-20 14:07:07.064170] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:46:59.453 [2024-11-20 14:07:07.064181] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:47:02.744 [2024-11-20 14:07:10.266873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:02.744 [2024-11-20 14:07:10.266956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:47:02.744 [2024-11-20 14:07:10.266971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3208.879 ms 00:47:02.744 [2024-11-20 14:07:10.266979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:02.744 [2024-11-20 14:07:10.306171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:02.744 [2024-11-20 14:07:10.306227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:47:02.744 [2024-11-20 14:07:10.306241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.952 ms 00:47:02.744 [2024-11-20 14:07:10.306249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:02.744 [2024-11-20 14:07:10.306381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:02.744 [2024-11-20 14:07:10.306399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:47:02.744 [2024-11-20 14:07:10.306410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:47:02.744 [2024-11-20 14:07:10.306419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:02.744 [2024-11-20 14:07:10.352460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:02.744 [2024-11-20 14:07:10.352517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:47:02.744 [2024-11-20 14:07:10.352531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.087 ms 00:47:02.744 [2024-11-20 14:07:10.352542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:02.744 [2024-11-20 14:07:10.352604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:02.744 [2024-11-20 14:07:10.352613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:47:02.744 [2024-11-20 14:07:10.352622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:47:02.744 [2024-11-20 14:07:10.352630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:02.744 [2024-11-20 14:07:10.353185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:02.744 [2024-11-20 14:07:10.353214] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:47:02.744 [2024-11-20 14:07:10.353224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.439 ms 00:47:02.744 [2024-11-20 14:07:10.353233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:02.744 [2024-11-20 14:07:10.353287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:02.744 [2024-11-20 14:07:10.353298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:47:02.744 [2024-11-20 14:07:10.353306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:47:02.744 [2024-11-20 14:07:10.353315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:02.744 [2024-11-20 14:07:10.373834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:02.744 [2024-11-20 14:07:10.373886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:47:02.744 [2024-11-20 14:07:10.373899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.534 ms 00:47:02.744 [2024-11-20 14:07:10.373908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:02.744 [2024-11-20 14:07:10.407388] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:47:02.744 [2024-11-20 14:07:10.407441] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:47:02.744 [2024-11-20 14:07:10.407456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:02.744 [2024-11-20 14:07:10.407465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:47:02.744 [2024-11-20 14:07:10.407476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.451 ms 00:47:02.744 [2024-11-20 14:07:10.407484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:02.744 [2024-11-20 14:07:10.427533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:02.744 [2024-11-20 14:07:10.427584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:47:02.744 [2024-11-20 14:07:10.427597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.029 ms 00:47:02.744 [2024-11-20 14:07:10.427606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:02.744 [2024-11-20 14:07:10.446484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:02.744 [2024-11-20 14:07:10.446532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:47:02.744 [2024-11-20 14:07:10.446543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.859 ms 00:47:02.744 [2024-11-20 14:07:10.446551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:03.004 [2024-11-20 14:07:10.465333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:03.004 [2024-11-20 14:07:10.465380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:47:03.004 [2024-11-20 14:07:10.465393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.774 ms 00:47:03.004 [2024-11-20 14:07:10.465401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:03.004 [2024-11-20 14:07:10.466318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:03.004 [2024-11-20 14:07:10.466355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:47:03.004 [2024-11-20 
14:07:10.466367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.787 ms 00:47:03.004 [2024-11-20 14:07:10.466376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:03.004 [2024-11-20 14:07:10.553989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:03.004 [2024-11-20 14:07:10.554065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:47:03.004 [2024-11-20 14:07:10.554079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 87.754 ms 00:47:03.004 [2024-11-20 14:07:10.554088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:03.004 [2024-11-20 14:07:10.566733] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:47:03.004 [2024-11-20 14:07:10.567834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:03.004 [2024-11-20 14:07:10.567878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:47:03.004 [2024-11-20 14:07:10.567892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.677 ms 00:47:03.004 [2024-11-20 14:07:10.567900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:03.004 [2024-11-20 14:07:10.568026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:03.004 [2024-11-20 14:07:10.568050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:47:03.004 [2024-11-20 14:07:10.568061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:47:03.004 [2024-11-20 14:07:10.568069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:03.004 [2024-11-20 14:07:10.568153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:03.004 [2024-11-20 14:07:10.568172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:47:03.004 [2024-11-20 14:07:10.568182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:47:03.004 [2024-11-20 14:07:10.568190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:03.004 [2024-11-20 14:07:10.568216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:03.004 [2024-11-20 14:07:10.568227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:47:03.004 [2024-11-20 14:07:10.568239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:47:03.004 [2024-11-20 14:07:10.568247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:03.004 [2024-11-20 14:07:10.568279] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:47:03.004 [2024-11-20 14:07:10.568290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:03.004 [2024-11-20 14:07:10.568298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:47:03.004 [2024-11-20 14:07:10.568306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:47:03.004 [2024-11-20 14:07:10.568315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:03.004 [2024-11-20 14:07:10.605086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:03.004 [2024-11-20 14:07:10.605145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:47:03.004 [2024-11-20 14:07:10.605158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 36.816 ms 00:47:03.005 [2024-11-20 14:07:10.605166] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:47:03.005 [2024-11-20 14:07:10.605256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:47:03.005 [2024-11-20 14:07:10.605267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
00:47:03.005 [2024-11-20 14:07:10.605277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms
00:47:03.005 [2024-11-20 14:07:10.605285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:47:03.005 [2024-11-20 14:07:10.606541] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3584.487 ms, result 0
00:47:03.005 [2024-11-20 14:07:10.621436] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:47:03.005 [2024-11-20 14:07:10.637423] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
00:47:03.005 [2024-11-20 14:07:10.646703] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:47:03.005 14:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:47:03.005 14:07:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0
00:47:03.005 14:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
00:47:03.005 14:07:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
00:47:03.005 14:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
00:47:03.264 [2024-11-20 14:07:10.878314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:47:03.264 [2024-11-20 14:07:10.878372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
00:47:03.264 [2024-11-20 14:07:10.878387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms
00:47:03.264 [2024-11-20 14:07:10.878398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:47:03.264 [2024-11-20 14:07:10.878427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:47:03.264 [2024-11-20 14:07:10.878435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
00:47:03.264 [2024-11-20 14:07:10.878443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:47:03.264 [2024-11-20 14:07:10.878451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:47:03.264 [2024-11-20 14:07:10.878469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
00:47:03.264 [2024-11-20 14:07:10.878477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
00:47:03.264 [2024-11-20 14:07:10.878485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
00:47:03.264 [2024-11-20 14:07:10.878494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
00:47:03.264 [2024-11-20 14:07:10.878555] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.241 ms, result 0
00:47:03.264 true
00:47:03.264 14:07:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:47:03.524 {
00:47:03.524   "name": "ftl",
00:47:03.524   "properties": [
00:47:03.524     {
00:47:03.524       "name": "superblock_version",
00:47:03.524       "value": 5,
00:47:03.524       "read-only": true
00:47:03.524     },
00:47:03.524     {
00:47:03.524       "name": "base_device",
00:47:03.524       "bands": [
00:47:03.524         {
00:47:03.524           "id": 0,
00:47:03.524           "state": "CLOSED",
00:47:03.524           "validity": 1.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 1,
00:47:03.524           "state": "CLOSED",
00:47:03.524           "validity": 1.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 2,
00:47:03.524           "state": "CLOSED",
00:47:03.524           "validity": 0.007843137254901933
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 3,
00:47:03.524           "state": "FREE",
00:47:03.524           "validity": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 4,
00:47:03.524           "state": "FREE",
00:47:03.524           "validity": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 5,
00:47:03.524           "state": "FREE",
00:47:03.524           "validity": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 6,
00:47:03.524           "state": "FREE",
00:47:03.524           "validity": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 7,
00:47:03.524           "state": "FREE",
00:47:03.524           "validity": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 8,
00:47:03.524           "state": "FREE",
00:47:03.524           "validity": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 9,
00:47:03.524           "state": "FREE",
00:47:03.524           "validity": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 10,
00:47:03.524           "state": "FREE",
00:47:03.524           "validity": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 11,
00:47:03.524           "state": "FREE",
00:47:03.524           "validity": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 12,
00:47:03.524           "state": "FREE",
00:47:03.524           "validity": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 13,
00:47:03.524           "state": "FREE",
00:47:03.524           "validity": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 14,
00:47:03.524           "state": "FREE",
00:47:03.524           "validity": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 15,
00:47:03.524           "state": "FREE",
00:47:03.524           "validity": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 16,
00:47:03.524           "state": "FREE",
00:47:03.524           "validity": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 17,
00:47:03.524           "state": "FREE",
00:47:03.524           "validity": 0.0
00:47:03.524         }
00:47:03.524       ],
00:47:03.524       "read-only": true
00:47:03.524     },
00:47:03.524     {
00:47:03.524       "name": "cache_device",
00:47:03.524       "type": "bdev",
00:47:03.524       "chunks": [
00:47:03.524         {
00:47:03.524           "id": 0,
00:47:03.524           "state": "INACTIVE",
00:47:03.524           "utilization": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 1,
00:47:03.524           "state": "OPEN",
00:47:03.524           "utilization": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 2,
00:47:03.524           "state": "OPEN",
00:47:03.524           "utilization": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 3,
00:47:03.524           "state": "FREE",
00:47:03.524           "utilization": 0.0
00:47:03.524         },
00:47:03.524         {
00:47:03.524           "id": 4,
00:47:03.524           "state": "FREE",
00:47:03.524           "utilization": 0.0
00:47:03.524         }
00:47:03.524       ],
00:47:03.524       "read-only": true
00:47:03.524     },
00:47:03.524     {
00:47:03.524       "name": "verbose_mode",
00:47:03.524       "value": true,
00:47:03.524       "unit": "",
00:47:03.524       "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
00:47:03.524     },
00:47:03.524     {
00:47:03.524       "name": "prep_upgrade_on_shutdown",
00:47:03.524       "value": false,
00:47:03.524       "unit": "",
00:47:03.524       "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
00:47:03.524     }
00:47:03.524   ]
00:47:03.524 }
00:47:03.524 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
00:47:03.525 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties
00:47:03.525 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:47:03.784 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0
00:47:03.784 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]]
00:47:03.784 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties
00:47:03.784 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
00:47:03.784 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length'
00:47:04.045 Validate MD5 checksum, iteration 1
00:47:04.045 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0
00:47:04.045 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]]
00:47:04.045 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum
00:47:04.045 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0
00:47:04.045 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 ))
00:47:04.045 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations ))
00:47:04.045 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1'
00:47:04.045 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:47:04.045 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup
00:47:04.045 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock'
00:47:04.045 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]]
00:47:04.045 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0
00:47:04.045 14:07:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
00:47:04.045 [2024-11-20 14:07:11.606329] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization...
00:47:04.045 [2024-11-20 14:07:11.606451] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84533 ] 00:47:04.304 [2024-11-20 14:07:11.780193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:04.304 [2024-11-20 14:07:11.913543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:06.211  [2024-11-20T14:07:14.496Z] Copying: 628/1024 [MB] (628 MBps) [2024-11-20T14:07:16.412Z] Copying: 1024/1024 [MB] (average 612 MBps) 00:47:08.693 00:47:08.693 14:07:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:47:08.693 14:07:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:47:10.088 14:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:47:10.088 14:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c69469ef8eba3dbd4c3f1c8f3cfb6883 00:47:10.088 14:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c69469ef8eba3dbd4c3f1c8f3cfb6883 != \c\6\9\4\6\9\e\f\8\e\b\a\3\d\b\d\4\c\3\f\1\c\8\f\3\c\f\b\6\8\8\3 ]] 00:47:10.088 14:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:47:10.088 14:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:47:10.088 14:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:47:10.088 Validate MD5 checksum, iteration 2 00:47:10.088 14:07:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:47:10.088 14:07:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:47:10.088 14:07:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:47:10.088 14:07:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:47:10.088 14:07:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:47:10.088 14:07:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:47:10.088 [2024-11-20 14:07:17.730622] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
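
Iteration 1 is complete at this point: tcp_dd read the first 1024 MiB slice back from ftln1 over NVMe/TCP at an average of 612 MBps, and md5sum of the output file produced c69469ef8eba3dbd4c3f1c8f3cfb6883, which matched the expected value (the backslash-escaped right-hand side of the [[ ... != ... ]] line is xtrace quoting of the comparison operand, not a different checksum). A sketch of the loop these xtrace lines come from; skip, i and iterations are the script's own names, file stands for the test-file path shown above, and the md5 array holding the sums captured when the data was written is an assumption:

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # tcp_dd (ftl/common.sh@198) points spdk_dd at the NVMe/TCP initiator
        # config and copies 1024 blocks of 1 MiB from the ftln1 namespace.
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$file" | cut -f1 -d' ')
        # md5[i] is assumed to hold the checksum recorded at write time.
        [[ $sum != "${md5[i]}" ]] && exit 1
    done
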
00:47:10.088 [2024-11-20 14:07:17.730829] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84598 ] 00:47:10.347 [2024-11-20 14:07:17.906245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:10.347 [2024-11-20 14:07:18.053497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:12.253  [2024-11-20T14:07:20.539Z] Copying: 615/1024 [MB] (615 MBps) [2024-11-20T14:07:21.915Z] Copying: 1024/1024 [MB] (average 611 MBps) 00:47:14.196 00:47:14.196 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:47:14.196 14:07:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fd7d2aa01fdfb538d4608d0ffbb190cf 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fd7d2aa01fdfb538d4608d0ffbb190cf != \f\d\7\d\2\a\a\0\1\f\d\f\b\5\3\8\d\4\6\0\8\d\0\f\f\b\b\1\9\0\c\f ]] 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 84452 ]] 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 84452 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84656 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84656 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84656 ']' 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:16.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
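
This is the dirty-shutdown pivot of the test: tcp_target_shutdown_dirty (ftl/common.sh@137-139) SIGKILLs the old target, pid 84452, so FTL never gets to persist a clean-shutdown marker, and tcp_target_setup brings a fresh spdk_tgt, pid 84656, back up from the tgt.json captured while the bdev stack was live; the 'SHM: clean 0, shm_clean 0' line and the recovery steps that follow are the behavior under test. A minimal sketch of the two helpers, reconstructed from the xtrace here and from the shell's Killed message just below ($spdk_tgt_bin, $spdk_tgt_cpumask and $spdk_tgt_cnfg appear there verbatim; the backgrounding and $! bookkeeping are assumptions):

    tcp_target_shutdown_dirty() {
        # kill -9 forbids any graceful teardown, so the FTL superblock stays
        # marked dirty and the next startup has to run recovery.
        [[ -n $spdk_tgt_pid ]] && kill -9 $spdk_tgt_pid
        unset spdk_tgt_pid
    }

    tcp_target_setup() {
        # Relaunch the target from the JSON config captured earlier; waitforlisten
        # blocks until the new process answers on /var/tmp/spdk.sock.
        $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" &
        spdk_tgt_pid=$!
        waitforlisten $spdk_tgt_pid
    }
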
00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:16.140 14:07:23 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:47:16.140 [2024-11-20 14:07:23.745172] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:47:16.140 [2024-11-20 14:07:23.745333] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84656 ] 00:47:16.400 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 84452 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:47:16.400 [2024-11-20 14:07:23.928212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:16.400 [2024-11-20 14:07:24.070880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:17.777 [2024-11-20 14:07:25.198495] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:47:17.777 [2024-11-20 14:07:25.198570] bdev.c:8475:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:47:17.777 [2024-11-20 14:07:25.347284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.777 [2024-11-20 14:07:25.347372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:47:17.777 [2024-11-20 14:07:25.347389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:47:17.777 [2024-11-20 14:07:25.347401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.777 [2024-11-20 14:07:25.347491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.777 [2024-11-20 14:07:25.347506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:47:17.777 [2024-11-20 14:07:25.347517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:47:17.777 [2024-11-20 14:07:25.347527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.777 [2024-11-20 14:07:25.347554] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:47:17.777 [2024-11-20 14:07:25.348607] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:47:17.777 [2024-11-20 14:07:25.348643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.777 [2024-11-20 14:07:25.348654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:47:17.777 [2024-11-20 14:07:25.348665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.097 ms 00:47:17.777 [2024-11-20 14:07:25.348675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.777 [2024-11-20 14:07:25.349174] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:47:17.777 [2024-11-20 14:07:25.376640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.777 [2024-11-20 14:07:25.376720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:47:17.777 [2024-11-20 14:07:25.376740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.516 ms 00:47:17.777 [2024-11-20 14:07:25.376752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.777 [2024-11-20 14:07:25.394132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:47:17.777 [2024-11-20 14:07:25.394193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:47:17.777 [2024-11-20 14:07:25.394214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:47:17.777 [2024-11-20 14:07:25.394224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.777 [2024-11-20 14:07:25.394767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.777 [2024-11-20 14:07:25.394793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:47:17.777 [2024-11-20 14:07:25.394808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.405 ms 00:47:17.777 [2024-11-20 14:07:25.394819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.777 [2024-11-20 14:07:25.394905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.777 [2024-11-20 14:07:25.394938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:47:17.777 [2024-11-20 14:07:25.394951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:47:17.777 [2024-11-20 14:07:25.394963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.777 [2024-11-20 14:07:25.395006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.777 [2024-11-20 14:07:25.395020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:47:17.777 [2024-11-20 14:07:25.395031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:47:17.777 [2024-11-20 14:07:25.395041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.777 [2024-11-20 14:07:25.395078] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:47:17.777 [2024-11-20 14:07:25.400710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.777 [2024-11-20 14:07:25.400767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:47:17.777 [2024-11-20 14:07:25.400783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.652 ms 00:47:17.777 [2024-11-20 14:07:25.400795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.777 [2024-11-20 14:07:25.400846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.777 [2024-11-20 14:07:25.400861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:47:17.777 [2024-11-20 14:07:25.400873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:47:17.777 [2024-11-20 14:07:25.400884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.777 [2024-11-20 14:07:25.400947] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:47:17.777 [2024-11-20 14:07:25.400978] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:47:17.777 [2024-11-20 14:07:25.401024] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:47:17.777 [2024-11-20 14:07:25.401049] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:47:17.777 [2024-11-20 14:07:25.401168] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:47:17.777 [2024-11-20 14:07:25.401186] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:47:17.777 [2024-11-20 14:07:25.401200] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:47:17.777 [2024-11-20 14:07:25.401215] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:47:17.777 [2024-11-20 14:07:25.401228] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:47:17.777 [2024-11-20 14:07:25.401241] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:47:17.777 [2024-11-20 14:07:25.401252] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:47:17.777 [2024-11-20 14:07:25.401262] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:47:17.777 [2024-11-20 14:07:25.401272] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:47:17.777 [2024-11-20 14:07:25.401284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.777 [2024-11-20 14:07:25.401299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:47:17.777 [2024-11-20 14:07:25.401311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.342 ms 00:47:17.777 [2024-11-20 14:07:25.401322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.777 [2024-11-20 14:07:25.401415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.777 [2024-11-20 14:07:25.401451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:47:17.777 [2024-11-20 14:07:25.401464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.065 ms 00:47:17.777 [2024-11-20 14:07:25.401475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.777 [2024-11-20 14:07:25.401588] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:47:17.777 [2024-11-20 14:07:25.401609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:47:17.777 [2024-11-20 14:07:25.401628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:47:17.777 [2024-11-20 14:07:25.401640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:17.777 [2024-11-20 14:07:25.401652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:47:17.777 [2024-11-20 14:07:25.401662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:47:17.777 [2024-11-20 14:07:25.401672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:47:17.777 [2024-11-20 14:07:25.401683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:47:17.777 [2024-11-20 14:07:25.401696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:47:17.777 [2024-11-20 14:07:25.401711] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:17.777 [2024-11-20 14:07:25.401746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:47:17.777 [2024-11-20 14:07:25.401763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:47:17.777 [2024-11-20 14:07:25.401775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:17.777 [2024-11-20 14:07:25.401786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:47:17.777 [2024-11-20 14:07:25.401798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:47:17.777 [2024-11-20 14:07:25.401809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:17.777 [2024-11-20 14:07:25.401819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:47:17.777 [2024-11-20 14:07:25.401829] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:47:17.777 [2024-11-20 14:07:25.401839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:17.777 [2024-11-20 14:07:25.401849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:47:17.777 [2024-11-20 14:07:25.401860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:47:17.777 [2024-11-20 14:07:25.401870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:47:17.777 [2024-11-20 14:07:25.401880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:47:17.777 [2024-11-20 14:07:25.401915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:47:17.778 [2024-11-20 14:07:25.401933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:47:17.778 [2024-11-20 14:07:25.401946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:47:17.778 [2024-11-20 14:07:25.401957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:47:17.778 [2024-11-20 14:07:25.401967] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:47:17.778 [2024-11-20 14:07:25.401977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:47:17.778 [2024-11-20 14:07:25.401987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:47:17.778 [2024-11-20 14:07:25.401997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:47:17.778 [2024-11-20 14:07:25.402008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:47:17.778 [2024-11-20 14:07:25.402017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:47:17.778 [2024-11-20 14:07:25.402027] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:17.778 [2024-11-20 14:07:25.402037] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:47:17.778 [2024-11-20 14:07:25.402048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:47:17.778 [2024-11-20 14:07:25.402058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:17.778 [2024-11-20 14:07:25.402068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:47:17.778 [2024-11-20 14:07:25.402077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:47:17.778 [2024-11-20 14:07:25.402086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:17.778 [2024-11-20 14:07:25.402097] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:47:17.778 [2024-11-20 14:07:25.402106] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:47:17.778 [2024-11-20 14:07:25.402117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:47:17.778 [2024-11-20 14:07:25.402126] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:47:17.778 [2024-11-20 14:07:25.402139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:47:17.778 [2024-11-20 14:07:25.402150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:47:17.778 [2024-11-20 14:07:25.402161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:47:17.778 [2024-11-20 14:07:25.402173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:47:17.778 [2024-11-20 14:07:25.402183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:47:17.778 [2024-11-20 14:07:25.402193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:47:17.778 [2024-11-20 14:07:25.402203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:47:17.778 [2024-11-20 14:07:25.402213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:47:17.778 [2024-11-20 14:07:25.402223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:47:17.778 [2024-11-20 14:07:25.402235] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:47:17.778 [2024-11-20 14:07:25.402249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:17.778 [2024-11-20 14:07:25.402261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:47:17.778 [2024-11-20 14:07:25.402272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:47:17.778 [2024-11-20 14:07:25.402284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:47:17.778 [2024-11-20 14:07:25.402301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:47:17.778 [2024-11-20 14:07:25.402318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:47:17.778 [2024-11-20 14:07:25.402335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:47:17.778 [2024-11-20 14:07:25.402346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:47:17.778 [2024-11-20 14:07:25.402357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:47:17.778 [2024-11-20 14:07:25.402369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:47:17.778 [2024-11-20 14:07:25.402379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:47:17.778 [2024-11-20 14:07:25.402390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:47:17.778 [2024-11-20 14:07:25.402400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:47:17.778 [2024-11-20 14:07:25.402411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:47:17.778 [2024-11-20 14:07:25.402422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:47:17.778 [2024-11-20 14:07:25.402433] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:47:17.778 [2024-11-20 14:07:25.402445] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:47:17.778 [2024-11-20 14:07:25.402463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:47:17.778 [2024-11-20 14:07:25.402474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:47:17.778 [2024-11-20 14:07:25.402486] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:47:17.778 [2024-11-20 14:07:25.402497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:47:17.778 [2024-11-20 14:07:25.402511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.778 [2024-11-20 14:07:25.402522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:47:17.778 [2024-11-20 14:07:25.402536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.989 ms 00:47:17.778 [2024-11-20 14:07:25.402547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.778 [2024-11-20 14:07:25.450150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.778 [2024-11-20 14:07:25.450220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:47:17.778 [2024-11-20 14:07:25.450238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 47.607 ms 00:47:17.778 [2024-11-20 14:07:25.450249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:17.778 [2024-11-20 14:07:25.450330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:17.778 [2024-11-20 14:07:25.450342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:47:17.778 [2024-11-20 14:07:25.450353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:47:17.778 [2024-11-20 14:07:25.450363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.037 [2024-11-20 14:07:25.504508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.037 [2024-11-20 14:07:25.504599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:47:18.037 [2024-11-20 14:07:25.504619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 54.097 ms 00:47:18.037 [2024-11-20 14:07:25.504630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.037 [2024-11-20 14:07:25.504743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.037 [2024-11-20 14:07:25.504756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:47:18.038 [2024-11-20 14:07:25.504768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:47:18.038 [2024-11-20 14:07:25.504786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.038 [2024-11-20 14:07:25.504973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.038 [2024-11-20 14:07:25.504995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:47:18.038 [2024-11-20 14:07:25.505024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.085 ms 00:47:18.038 [2024-11-20 14:07:25.505036] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:47:18.038 [2024-11-20 14:07:25.505095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.038 [2024-11-20 14:07:25.505114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:47:18.038 [2024-11-20 14:07:25.505127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:47:18.038 [2024-11-20 14:07:25.505149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.038 [2024-11-20 14:07:25.531468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.038 [2024-11-20 14:07:25.531547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:47:18.038 [2024-11-20 14:07:25.531565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.322 ms 00:47:18.038 [2024-11-20 14:07:25.531583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.038 [2024-11-20 14:07:25.531824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.038 [2024-11-20 14:07:25.531877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:47:18.038 [2024-11-20 14:07:25.531899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:47:18.038 [2024-11-20 14:07:25.531911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.038 [2024-11-20 14:07:25.570859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.038 [2024-11-20 14:07:25.570943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:47:18.038 [2024-11-20 14:07:25.570961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.997 ms 00:47:18.038 [2024-11-20 14:07:25.570972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.038 [2024-11-20 14:07:25.586387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.038 [2024-11-20 14:07:25.586432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:47:18.038 [2024-11-20 14:07:25.586472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.633 ms 00:47:18.038 [2024-11-20 14:07:25.586482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.038 [2024-11-20 14:07:25.683311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.038 [2024-11-20 14:07:25.683409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:47:18.038 [2024-11-20 14:07:25.683437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 96.921 ms 00:47:18.038 [2024-11-20 14:07:25.683448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.038 [2024-11-20 14:07:25.683783] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:47:18.038 [2024-11-20 14:07:25.684027] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:47:18.038 [2024-11-20 14:07:25.684265] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:47:18.038 [2024-11-20 14:07:25.684452] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:47:18.038 [2024-11-20 14:07:25.684475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.038 [2024-11-20 14:07:25.684488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:47:18.038 [2024-11-20 
14:07:25.684501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.934 ms 00:47:18.038 [2024-11-20 14:07:25.684513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.038 [2024-11-20 14:07:25.684658] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:47:18.038 [2024-11-20 14:07:25.684681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.038 [2024-11-20 14:07:25.684700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:47:18.038 [2024-11-20 14:07:25.684713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:47:18.038 [2024-11-20 14:07:25.684747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.038 [2024-11-20 14:07:25.708942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.038 [2024-11-20 14:07:25.709031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:47:18.038 [2024-11-20 14:07:25.709048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.198 ms 00:47:18.038 [2024-11-20 14:07:25.709059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.038 [2024-11-20 14:07:25.724663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.038 [2024-11-20 14:07:25.724755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:47:18.038 [2024-11-20 14:07:25.724772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:47:18.038 [2024-11-20 14:07:25.724784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.038 [2024-11-20 14:07:25.724948] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:47:18.038 [2024-11-20 14:07:25.725315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.038 [2024-11-20 14:07:25.725334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:47:18.038 [2024-11-20 14:07:25.725345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.371 ms 00:47:18.038 [2024-11-20 14:07:25.725355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.606 [2024-11-20 14:07:26.282999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.606 [2024-11-20 14:07:26.283089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:47:18.606 [2024-11-20 14:07:26.283125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 557.345 ms 00:47:18.606 [2024-11-20 14:07:26.283137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.606 [2024-11-20 14:07:26.289452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.606 [2024-11-20 14:07:26.289500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:47:18.606 [2024-11-20 14:07:26.289515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.349 ms 00:47:18.606 [2024-11-20 14:07:26.289525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.606 [2024-11-20 14:07:26.289991] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:47:18.606 [2024-11-20 14:07:26.290027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.606 [2024-11-20 14:07:26.290037] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:47:18.606 [2024-11-20 14:07:26.290049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.462 ms 00:47:18.606 [2024-11-20 14:07:26.290059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.606 [2024-11-20 14:07:26.290092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.606 [2024-11-20 14:07:26.290108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:47:18.606 [2024-11-20 14:07:26.290119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:47:18.606 [2024-11-20 14:07:26.290128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:18.606 [2024-11-20 14:07:26.290174] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 566.320 ms, result 0 00:47:18.606 [2024-11-20 14:07:26.290245] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:47:18.606 [2024-11-20 14:07:26.290429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:18.606 [2024-11-20 14:07:26.290445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:47:18.606 [2024-11-20 14:07:26.290456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.186 ms 00:47:18.606 [2024-11-20 14:07:26.290465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:19.174 [2024-11-20 14:07:26.838429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:19.174 [2024-11-20 14:07:26.838545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:47:19.174 [2024-11-20 14:07:26.838565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 547.700 ms 00:47:19.174 [2024-11-20 14:07:26.838575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:19.174 [2024-11-20 14:07:26.844957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:19.174 [2024-11-20 14:07:26.845018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:47:19.174 [2024-11-20 14:07:26.845033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.160 ms 00:47:19.174 [2024-11-20 14:07:26.845043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:19.174 [2024-11-20 14:07:26.845594] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:47:19.174 [2024-11-20 14:07:26.845624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:19.174 [2024-11-20 14:07:26.845635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:47:19.174 [2024-11-20 14:07:26.845647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.549 ms 00:47:19.174 [2024-11-20 14:07:26.845657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:19.174 [2024-11-20 14:07:26.845694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:19.174 [2024-11-20 14:07:26.845708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:47:19.174 [2024-11-20 14:07:26.845732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:47:19.174 [2024-11-20 14:07:26.845742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:19.174 [2024-11-20 
14:07:26.845790] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 556.628 ms, result 0 00:47:19.174 [2024-11-20 14:07:26.845844] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:47:19.174 [2024-11-20 14:07:26.845857] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:47:19.174 [2024-11-20 14:07:26.845870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:19.174 [2024-11-20 14:07:26.845881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:47:19.174 [2024-11-20 14:07:26.845892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1123.111 ms 00:47:19.174 [2024-11-20 14:07:26.845902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:19.174 [2024-11-20 14:07:26.845939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:19.174 [2024-11-20 14:07:26.845952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:47:19.174 [2024-11-20 14:07:26.845970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:47:19.174 [2024-11-20 14:07:26.845980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:19.174 [2024-11-20 14:07:26.862151] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:47:19.174 [2024-11-20 14:07:26.862382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:19.174 [2024-11-20 14:07:26.862396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:47:19.174 [2024-11-20 14:07:26.862411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.412 ms 00:47:19.174 [2024-11-20 14:07:26.862421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:19.174 [2024-11-20 14:07:26.863152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:19.174 [2024-11-20 14:07:26.863178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:47:19.174 [2024-11-20 14:07:26.863196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.573 ms 00:47:19.174 [2024-11-20 14:07:26.863206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:19.174 [2024-11-20 14:07:26.865286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:19.174 [2024-11-20 14:07:26.865321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:47:19.174 [2024-11-20 14:07:26.865332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.056 ms 00:47:19.174 [2024-11-20 14:07:26.865342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:19.174 [2024-11-20 14:07:26.865400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:19.174 [2024-11-20 14:07:26.865412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:47:19.174 [2024-11-20 14:07:26.865422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:47:19.174 [2024-11-20 14:07:26.865439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:19.174 [2024-11-20 14:07:26.865569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:19.174 [2024-11-20 14:07:26.865586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:47:19.174 
[2024-11-20 14:07:26.865597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:47:19.174 [2024-11-20 14:07:26.865606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:19.174 [2024-11-20 14:07:26.865633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:19.174 [2024-11-20 14:07:26.865643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:47:19.174 [2024-11-20 14:07:26.865652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:47:19.174 [2024-11-20 14:07:26.865662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:19.174 [2024-11-20 14:07:26.865709] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:47:19.174 [2024-11-20 14:07:26.865733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:19.174 [2024-11-20 14:07:26.865742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:47:19.174 [2024-11-20 14:07:26.865752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:47:19.174 [2024-11-20 14:07:26.865761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:19.174 [2024-11-20 14:07:26.865823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:19.174 [2024-11-20 14:07:26.865835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:47:19.174 [2024-11-20 14:07:26.865845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:47:19.174 [2024-11-20 14:07:26.865854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:19.174 [2024-11-20 14:07:26.867351] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1522.411 ms, result 0 00:47:19.174 [2024-11-20 14:07:26.882903] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:47:19.433 [2024-11-20 14:07:26.898919] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:47:19.433 [2024-11-20 14:07:26.909899] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:47:19.433 14:07:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:19.433 14:07:26 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:47:19.433 14:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:47:19.433 14:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:47:19.433 14:07:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:47:19.433 14:07:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:47:19.433 14:07:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:47:19.433 14:07:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:47:19.433 Validate MD5 checksum, iteration 1 00:47:19.433 14:07:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:47:19.433 14:07:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:47:19.433 14:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:47:19.433 14:07:26 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:47:19.433 14:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:47:19.433 14:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:47:19.433 14:07:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:47:19.433 [2024-11-20 14:07:27.054771] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 00:47:19.433 [2024-11-20 14:07:27.054919] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84715 ] 00:47:19.692 [2024-11-20 14:07:27.230002] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:19.692 [2024-11-20 14:07:27.373856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:21.602  [2024-11-20T14:07:29.895Z] Copying: 601/1024 [MB] (601 MBps) [2024-11-20T14:07:31.803Z] Copying: 1024/1024 [MB] (average 602 MBps) 00:47:24.084 00:47:24.084 14:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:47:24.084 14:07:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:47:25.464 14:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:47:25.464 14:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c69469ef8eba3dbd4c3f1c8f3cfb6883 00:47:25.464 14:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c69469ef8eba3dbd4c3f1c8f3cfb6883 != \c\6\9\4\6\9\e\f\8\e\b\a\3\d\b\d\4\c\3\f\1\c\8\f\3\c\f\b\6\8\8\3 ]] 00:47:25.464 14:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:47:25.464 14:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:47:25.464 Validate MD5 checksum, iteration 2 00:47:25.464 14:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:47:25.464 14:07:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:47:25.464 14:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:47:25.464 14:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:47:25.464 14:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:47:25.464 14:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:47:25.464 14:07:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:47:25.464 [2024-11-20 14:07:33.175980] Starting SPDK v25.01-pre git sha1 
d58114851 / DPDK 24.03.0 initialization... 00:47:25.464 [2024-11-20 14:07:33.176085] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84779 ] 00:47:25.725 [2024-11-20 14:07:33.352589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:25.984 [2024-11-20 14:07:33.465444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:47:27.366  [2024-11-20T14:07:36.023Z] Copying: 559/1024 [MB] (559 MBps) [2024-11-20T14:07:37.405Z] Copying: 1024/1024 [MB] (average 562 MBps) 00:47:29.686 00:47:29.686 14:07:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:47:29.686 14:07:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fd7d2aa01fdfb538d4608d0ffbb190cf 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fd7d2aa01fdfb538d4608d0ffbb190cf != \f\d\7\d\2\a\a\0\1\f\d\f\b\5\3\8\d\4\6\0\8\d\0\f\f\b\b\1\9\0\c\f ]] 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84656 ]] 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84656 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84656 ']' 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84656 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84656 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84656' 00:47:31.594 killing process with pid 84656 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 84656 00:47:31.594 14:07:38 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84656 00:47:32.532 [2024-11-20 14:07:40.225622] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:47:32.532 [2024-11-20 14:07:40.245238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:32.532 [2024-11-20 14:07:40.245308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:47:32.532 [2024-11-20 14:07:40.245326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:47:32.532 [2024-11-20 14:07:40.245337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:32.532 [2024-11-20 14:07:40.245365] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:47:32.532 [2024-11-20 14:07:40.250395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:32.532 [2024-11-20 14:07:40.250445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:47:32.532 [2024-11-20 14:07:40.250469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.021 ms 00:47:32.532 [2024-11-20 14:07:40.250480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:32.532 [2024-11-20 14:07:40.250766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:32.532 [2024-11-20 14:07:40.250790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:47:32.532 [2024-11-20 14:07:40.250803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.254 ms 00:47:32.532 [2024-11-20 14:07:40.250816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:32.793 [2024-11-20 14:07:40.253405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:32.793 [2024-11-20 14:07:40.253454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:47:32.793 [2024-11-20 14:07:40.253467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.571 ms 00:47:32.793 [2024-11-20 14:07:40.253477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:32.793 [2024-11-20 14:07:40.254637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:32.793 [2024-11-20 14:07:40.254671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:47:32.793 [2024-11-20 14:07:40.254686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.116 ms 00:47:32.793 [2024-11-20 14:07:40.254697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:32.793 [2024-11-20 14:07:40.271341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:32.793 [2024-11-20 14:07:40.271389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:47:32.793 [2024-11-20 14:07:40.271403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.595 ms 00:47:32.793 [2024-11-20 14:07:40.271438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:32.793 [2024-11-20 14:07:40.280187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:32.793 [2024-11-20 14:07:40.280231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:47:32.793 [2024-11-20 14:07:40.280245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.720 ms 00:47:32.793 [2024-11-20 14:07:40.280254] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:47:32.793 [2024-11-20 14:07:40.280368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:32.793 [2024-11-20 14:07:40.280383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:47:32.793 [2024-11-20 14:07:40.280394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:47:32.793 [2024-11-20 14:07:40.280404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:32.793 [2024-11-20 14:07:40.295964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:32.793 [2024-11-20 14:07:40.296010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:47:32.793 [2024-11-20 14:07:40.296025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.560 ms 00:47:32.793 [2024-11-20 14:07:40.296036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:32.793 [2024-11-20 14:07:40.312128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:32.793 [2024-11-20 14:07:40.312176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:47:32.793 [2024-11-20 14:07:40.312208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.079 ms 00:47:32.793 [2024-11-20 14:07:40.312220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:32.793 [2024-11-20 14:07:40.327545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:32.793 [2024-11-20 14:07:40.327590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:47:32.793 [2024-11-20 14:07:40.327603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.307 ms 00:47:32.793 [2024-11-20 14:07:40.327612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:32.793 [2024-11-20 14:07:40.344371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:32.793 [2024-11-20 14:07:40.344423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:47:32.793 [2024-11-20 14:07:40.344438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.694 ms 00:47:32.793 [2024-11-20 14:07:40.344448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:32.793 [2024-11-20 14:07:40.344493] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:47:32.793 [2024-11-20 14:07:40.344513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:47:32.793 [2024-11-20 14:07:40.344527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:47:32.793 [2024-11-20 14:07:40.344539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:47:32.793 [2024-11-20 14:07:40.344550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:47:32.793 [2024-11-20 14:07:40.344561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:47:32.793 [2024-11-20 14:07:40.344573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:47:32.793 [2024-11-20 14:07:40.344584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:47:32.793 [2024-11-20 14:07:40.344596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:47:32.793 
[2024-11-20 14:07:40.344607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:47:32.793 [2024-11-20 14:07:40.344617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:47:32.793 [2024-11-20 14:07:40.344628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:47:32.793 [2024-11-20 14:07:40.344638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:47:32.793 [2024-11-20 14:07:40.344649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:47:32.793 [2024-11-20 14:07:40.344659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:47:32.793 [2024-11-20 14:07:40.344669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:47:32.793 [2024-11-20 14:07:40.344679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:47:32.793 [2024-11-20 14:07:40.344690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:47:32.793 [2024-11-20 14:07:40.344700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:47:32.793 [2024-11-20 14:07:40.344723] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:47:32.793 [2024-11-20 14:07:40.344736] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 5f8eea4c-b91c-4833-a9f5-b93bc1c846c0 00:47:32.793 [2024-11-20 14:07:40.344747] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:47:32.793 [2024-11-20 14:07:40.344759] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:47:32.793 [2024-11-20 14:07:40.344770] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:47:32.793 [2024-11-20 14:07:40.344780] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:47:32.793 [2024-11-20 14:07:40.344790] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:47:32.793 [2024-11-20 14:07:40.344802] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:47:32.793 [2024-11-20 14:07:40.344841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:47:32.793 [2024-11-20 14:07:40.344861] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:47:32.793 [2024-11-20 14:07:40.344872] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:47:32.793 [2024-11-20 14:07:40.344884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:32.793 [2024-11-20 14:07:40.344906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:47:32.793 [2024-11-20 14:07:40.344918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.394 ms 00:47:32.793 [2024-11-20 14:07:40.344929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:32.793 [2024-11-20 14:07:40.367537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:32.793 [2024-11-20 14:07:40.367604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:47:32.793 [2024-11-20 14:07:40.367621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.595 ms 00:47:32.793 [2024-11-20 14:07:40.367633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
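
Each FTL management step in the trace above is reported as a fixed quartet from mngt/ftl_mngt.c (Action, then name, duration, and status notices), and finish_msg then gives the per-process total. A minimal sketch for pulling per-step timings out of a saved console log, slowest first; the script name is hypothetical, and it assumes the one-notice-per-line layout the console actually emits:

  #!/usr/bin/env bash
  # step_durations.sh (hypothetical helper, not part of the SPDK tree):
  # pair each "name:" notice with the "duration:" notice that follows it
  # and print the FTL management steps sorted slowest-first.
  log="${1:?usage: step_durations.sh <console.log>}"
  awk '
    /trace_step: \*NOTICE\*: \[FTL\]\[ftl\] name: / {
      sub(/.*name: /, "")          # keep only the step name
      step = $0
      next
    }
    /trace_step: \*NOTICE\*: \[FTL\]\[ftl\] duration: / {
      for (i = 1; i <= NF; i++)    # duration value precedes the "ms" unit
        if ($i == "duration:") d = $(i + 1)
      printf "%10.3f ms  %s\n", d, step
    }
  ' "$log" | sort -rn

Run against this console output it would flag, for example, "Recover open chunks P2L" at 1123.111 ms as the dominant step of the 1522.411 ms FTL startup traced above.
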
00:47:32.793 [2024-11-20 14:07:40.368324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:47:32.793 [2024-11-20 14:07:40.368354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:47:32.793 [2024-11-20 14:07:40.368368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.653 ms 00:47:32.793 [2024-11-20 14:07:40.368379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:32.793 [2024-11-20 14:07:40.440485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:32.793 [2024-11-20 14:07:40.440587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:47:32.793 [2024-11-20 14:07:40.440606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:32.793 [2024-11-20 14:07:40.440617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:32.793 [2024-11-20 14:07:40.440694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:32.793 [2024-11-20 14:07:40.440706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:47:32.793 [2024-11-20 14:07:40.440738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:32.793 [2024-11-20 14:07:40.440749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:32.793 [2024-11-20 14:07:40.440907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:32.793 [2024-11-20 14:07:40.440937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:47:32.793 [2024-11-20 14:07:40.440949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:32.793 [2024-11-20 14:07:40.440958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:32.793 [2024-11-20 14:07:40.440982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:32.793 [2024-11-20 14:07:40.440999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:47:32.793 [2024-11-20 14:07:40.441009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:32.793 [2024-11-20 14:07:40.441019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:33.054 [2024-11-20 14:07:40.579193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:33.054 [2024-11-20 14:07:40.579286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:47:33.054 [2024-11-20 14:07:40.579305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:33.054 [2024-11-20 14:07:40.579316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:33.054 [2024-11-20 14:07:40.693547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:33.054 [2024-11-20 14:07:40.693636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:47:33.054 [2024-11-20 14:07:40.693653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:33.054 [2024-11-20 14:07:40.693663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:33.054 [2024-11-20 14:07:40.693829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:33.054 [2024-11-20 14:07:40.693842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:47:33.054 [2024-11-20 14:07:40.693854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:33.054 [2024-11-20 14:07:40.693863] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:33.054 [2024-11-20 14:07:40.693916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:33.054 [2024-11-20 14:07:40.693927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:47:33.054 [2024-11-20 14:07:40.693945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:33.054 [2024-11-20 14:07:40.693969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:33.054 [2024-11-20 14:07:40.694131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:33.054 [2024-11-20 14:07:40.694158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:47:33.054 [2024-11-20 14:07:40.694170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:33.054 [2024-11-20 14:07:40.694180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:33.054 [2024-11-20 14:07:40.694229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:33.054 [2024-11-20 14:07:40.694242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:47:33.054 [2024-11-20 14:07:40.694252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:33.054 [2024-11-20 14:07:40.694267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:33.054 [2024-11-20 14:07:40.694316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:33.054 [2024-11-20 14:07:40.694327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:47:33.054 [2024-11-20 14:07:40.694337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:33.054 [2024-11-20 14:07:40.694346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:33.054 [2024-11-20 14:07:40.694399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:47:33.054 [2024-11-20 14:07:40.694411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:47:33.054 [2024-11-20 14:07:40.694426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:47:33.054 [2024-11-20 14:07:40.694435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:47:33.054 [2024-11-20 14:07:40.694586] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 450.171 ms, result 0 00:47:34.437 14:07:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:47:34.437 14:07:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:47:34.437 14:07:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:47:34.437 14:07:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:47:34.437 14:07:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:47:34.437 14:07:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:47:34.437 14:07:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:47:34.437 Remove shared memory files 00:47:34.437 14:07:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:47:34.437 14:07:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:47:34.437 14:07:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:47:34.437 14:07:42 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid84452 00:47:34.437 14:07:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:47:34.437 14:07:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:47:34.437 00:47:34.437 real 1m32.912s 00:47:34.437 user 2m7.797s 00:47:34.437 sys 0m24.250s 00:47:34.437 14:07:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:34.437 14:07:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:47:34.437 ************************************ 00:47:34.437 END TEST ftl_upgrade_shutdown 00:47:34.437 ************************************ 00:47:34.697 14:07:42 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:47:34.697 14:07:42 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:47:34.697 14:07:42 ftl -- ftl/ftl.sh@14 -- # killprocess 77568 00:47:34.697 14:07:42 ftl -- common/autotest_common.sh@954 -- # '[' -z 77568 ']' 00:47:34.697 14:07:42 ftl -- common/autotest_common.sh@958 -- # kill -0 77568 00:47:34.697 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (77568) - No such process 00:47:34.697 Process with pid 77568 is not found 00:47:34.697 14:07:42 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 77568 is not found' 00:47:34.697 14:07:42 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:47:34.697 14:07:42 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=84909 00:47:34.697 14:07:42 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:47:34.697 14:07:42 ftl -- ftl/ftl.sh@20 -- # waitforlisten 84909 00:47:34.697 14:07:42 ftl -- common/autotest_common.sh@835 -- # '[' -z 84909 ']' 00:47:34.697 14:07:42 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:47:34.697 14:07:42 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:47:34.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:47:34.697 14:07:42 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:47:34.697 14:07:42 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:47:34.697 14:07:42 ftl -- common/autotest_common.sh@10 -- # set +x 00:47:34.697 [2024-11-20 14:07:42.293203] Starting SPDK v25.01-pre git sha1 d58114851 / DPDK 24.03.0 initialization... 
d58114851 / DPDK 24.03.0 initialization... 
00:47:34.697 [2024-11-20 14:07:42.293332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84909 ] 00:47:34.958 [2024-11-20 14:07:42.469835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:47:34.958 [2024-11-20 14:07:42.611640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:47:36.340 14:07:43 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:47:36.340 14:07:43 ftl -- common/autotest_common.sh@868 -- # return 0 00:47:36.340 14:07:43 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:47:36.340 nvme0n1 00:47:36.340 14:07:43 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:47:36.340 14:07:43 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:47:36.340 14:07:43 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:47:36.599 14:07:44 ftl -- ftl/common.sh@28 -- # stores=fb5d9f83-2fe4-4eed-b903-3a6b98068e83 00:47:36.599 14:07:44 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:47:36.599 14:07:44 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fb5d9f83-2fe4-4eed-b903-3a6b98068e83 00:47:36.859 14:07:44 ftl -- ftl/ftl.sh@23 -- # killprocess 84909 00:47:36.859 14:07:44 ftl -- common/autotest_common.sh@954 -- # '[' -z 84909 ']' 00:47:36.859 14:07:44 ftl -- common/autotest_common.sh@958 -- # kill -0 84909 00:47:36.859 14:07:44 ftl -- common/autotest_common.sh@959 -- # uname 00:47:36.859 14:07:44 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:47:36.859 14:07:44 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84909 00:47:36.859 killing process with pid 84909 00:47:36.859 14:07:44 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:47:36.859 14:07:44 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:47:36.859 14:07:44 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84909' 00:47:36.859 14:07:44 ftl -- common/autotest_common.sh@973 -- # kill 84909 00:47:36.859 14:07:44 ftl -- common/autotest_common.sh@978 -- # wait 84909 00:47:39.400 14:07:47 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:47:39.660 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:47:39.919 Waiting for block devices as requested 00:47:39.919 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:47:39.919 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:47:40.179 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:47:40.179 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:47:45.478 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:47:45.478 14:07:52 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:47:45.478 Remove shared memory files 00:47:45.478 14:07:52 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:47:45.478 14:07:52 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:47:45.478 14:07:52 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:47:45.478 14:07:52 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:47:45.478 14:07:52 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:47:45.478 14:07:52 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:47:45.478 
************************************ 00:47:45.478 END TEST ftl 00:47:45.478 ************************************ 00:47:45.478 00:47:45.478 real 10m41.563s 00:47:45.478 user 13m26.326s 00:47:45.478 sys 1m23.887s 00:47:45.478 14:07:52 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:47:45.478 14:07:52 ftl -- common/autotest_common.sh@10 -- # set +x 00:47:45.478 14:07:52 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:47:45.478 14:07:52 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:47:45.478 14:07:52 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:47:45.478 14:07:52 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:47:45.478 14:07:52 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:47:45.478 14:07:52 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:47:45.478 14:07:52 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:47:45.478 14:07:52 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:47:45.478 14:07:52 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:47:45.478 14:07:52 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:47:45.478 14:07:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:47:45.478 14:07:52 -- common/autotest_common.sh@10 -- # set +x 00:47:45.478 14:07:52 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:47:45.478 14:07:52 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:47:45.478 14:07:52 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:47:45.478 14:07:52 -- common/autotest_common.sh@10 -- # set +x 00:47:47.389 INFO: APP EXITING 00:47:47.389 INFO: killing all VMs 00:47:47.389 INFO: killing vhost app 00:47:47.389 INFO: EXIT DONE 00:47:47.648 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:47:48.218 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:47:48.218 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:47:48.218 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:47:48.218 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:47:48.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:47:49.047 Cleaning 00:47:49.047 Removing: /var/run/dpdk/spdk0/config 00:47:49.047 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:47:49.047 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:47:49.047 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:47:49.047 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:47:49.047 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:47:49.047 Removing: /var/run/dpdk/spdk0/hugepage_info 00:47:49.047 Removing: /var/run/dpdk/spdk0 00:47:49.047 Removing: /var/run/dpdk/spdk_pid57831 00:47:49.047 Removing: /var/run/dpdk/spdk_pid58077 00:47:49.047 Removing: /var/run/dpdk/spdk_pid58316 00:47:49.047 Removing: /var/run/dpdk/spdk_pid58421 00:47:49.047 Removing: /var/run/dpdk/spdk_pid58483 00:47:49.047 Removing: /var/run/dpdk/spdk_pid58616 00:47:49.047 Removing: /var/run/dpdk/spdk_pid58640 00:47:49.047 Removing: /var/run/dpdk/spdk_pid58850 00:47:49.047 Removing: /var/run/dpdk/spdk_pid58967 00:47:49.047 Removing: /var/run/dpdk/spdk_pid59074 00:47:49.047 Removing: /var/run/dpdk/spdk_pid59201 00:47:49.047 Removing: /var/run/dpdk/spdk_pid59316 00:47:49.047 Removing: /var/run/dpdk/spdk_pid59356 00:47:49.047 Removing: /var/run/dpdk/spdk_pid59392 00:47:49.047 Removing: /var/run/dpdk/spdk_pid59468 00:47:49.047 Removing: /var/run/dpdk/spdk_pid59596 00:47:49.047 Removing: /var/run/dpdk/spdk_pid60062 00:47:49.047 Removing: /var/run/dpdk/spdk_pid60139 
00:47:49.047 Removing: /var/run/dpdk/spdk_pid60214 00:47:49.047 Removing: /var/run/dpdk/spdk_pid60236 00:47:49.047 Removing: /var/run/dpdk/spdk_pid60397 00:47:49.047 Removing: /var/run/dpdk/spdk_pid60413 00:47:49.047 Removing: /var/run/dpdk/spdk_pid60572 00:47:49.047 Removing: /var/run/dpdk/spdk_pid60588 00:47:49.047 Removing: /var/run/dpdk/spdk_pid60663 00:47:49.047 Removing: /var/run/dpdk/spdk_pid60692 00:47:49.047 Removing: /var/run/dpdk/spdk_pid60762 00:47:49.047 Removing: /var/run/dpdk/spdk_pid60785 00:47:49.047 Removing: /var/run/dpdk/spdk_pid60997 00:47:49.047 Removing: /var/run/dpdk/spdk_pid61035 00:47:49.047 Removing: /var/run/dpdk/spdk_pid61117 00:47:49.047 Removing: /var/run/dpdk/spdk_pid61322 00:47:49.307 Removing: /var/run/dpdk/spdk_pid61422 00:47:49.307 Removing: /var/run/dpdk/spdk_pid61465 00:47:49.307 Removing: /var/run/dpdk/spdk_pid61930 00:47:49.307 Removing: /var/run/dpdk/spdk_pid62034 00:47:49.307 Removing: /var/run/dpdk/spdk_pid62154 00:47:49.307 Removing: /var/run/dpdk/spdk_pid62218 00:47:49.307 Removing: /var/run/dpdk/spdk_pid62238 00:47:49.307 Removing: /var/run/dpdk/spdk_pid62328 00:47:49.307 Removing: /var/run/dpdk/spdk_pid62975 00:47:49.307 Removing: /var/run/dpdk/spdk_pid63017 00:47:49.307 Removing: /var/run/dpdk/spdk_pid63517 00:47:49.307 Removing: /var/run/dpdk/spdk_pid63626 00:47:49.307 Removing: /var/run/dpdk/spdk_pid63752 00:47:49.307 Removing: /var/run/dpdk/spdk_pid63809 00:47:49.307 Removing: /var/run/dpdk/spdk_pid63836 00:47:49.307 Removing: /var/run/dpdk/spdk_pid63867 00:47:49.307 Removing: /var/run/dpdk/spdk_pid65751 00:47:49.307 Removing: /var/run/dpdk/spdk_pid65905 00:47:49.307 Removing: /var/run/dpdk/spdk_pid65909 00:47:49.307 Removing: /var/run/dpdk/spdk_pid65921 00:47:49.307 Removing: /var/run/dpdk/spdk_pid65995 00:47:49.307 Removing: /var/run/dpdk/spdk_pid65999 00:47:49.307 Removing: /var/run/dpdk/spdk_pid66017 00:47:49.307 Removing: /var/run/dpdk/spdk_pid66083 00:47:49.307 Removing: /var/run/dpdk/spdk_pid66093 00:47:49.307 Removing: /var/run/dpdk/spdk_pid66105 00:47:49.307 Removing: /var/run/dpdk/spdk_pid66177 00:47:49.307 Removing: /var/run/dpdk/spdk_pid66192 00:47:49.307 Removing: /var/run/dpdk/spdk_pid66204 00:47:49.307 Removing: /var/run/dpdk/spdk_pid67706 00:47:49.307 Removing: /var/run/dpdk/spdk_pid67826 00:47:49.307 Removing: /var/run/dpdk/spdk_pid69254 00:47:49.307 Removing: /var/run/dpdk/spdk_pid70999 00:47:49.307 Removing: /var/run/dpdk/spdk_pid71090 00:47:49.307 Removing: /var/run/dpdk/spdk_pid71169 00:47:49.307 Removing: /var/run/dpdk/spdk_pid71280 00:47:49.307 Removing: /var/run/dpdk/spdk_pid71382 00:47:49.307 Removing: /var/run/dpdk/spdk_pid71479 00:47:49.307 Removing: /var/run/dpdk/spdk_pid71565 00:47:49.307 Removing: /var/run/dpdk/spdk_pid71646 00:47:49.307 Removing: /var/run/dpdk/spdk_pid71756 00:47:49.307 Removing: /var/run/dpdk/spdk_pid71853 00:47:49.307 Removing: /var/run/dpdk/spdk_pid71959 00:47:49.307 Removing: /var/run/dpdk/spdk_pid72041 00:47:49.307 Removing: /var/run/dpdk/spdk_pid72127 00:47:49.307 Removing: /var/run/dpdk/spdk_pid72237 00:47:49.307 Removing: /var/run/dpdk/spdk_pid72340 00:47:49.307 Removing: /var/run/dpdk/spdk_pid72440 00:47:49.307 Removing: /var/run/dpdk/spdk_pid72523 00:47:49.307 Removing: /var/run/dpdk/spdk_pid72609 00:47:49.307 Removing: /var/run/dpdk/spdk_pid72726 00:47:49.307 Removing: /var/run/dpdk/spdk_pid72827 00:47:49.307 Removing: /var/run/dpdk/spdk_pid72929 00:47:49.307 Removing: /var/run/dpdk/spdk_pid73013 00:47:49.307 Removing: /var/run/dpdk/spdk_pid73093 00:47:49.307 Removing: 
/var/run/dpdk/spdk_pid73172 00:47:49.307 Removing: /var/run/dpdk/spdk_pid73248 00:47:49.307 Removing: /var/run/dpdk/spdk_pid73357 00:47:49.307 Removing: /var/run/dpdk/spdk_pid73449 00:47:49.307 Removing: /var/run/dpdk/spdk_pid73554 00:47:49.307 Removing: /var/run/dpdk/spdk_pid73638 00:47:49.307 Removing: /var/run/dpdk/spdk_pid73719 00:47:49.307 Removing: /var/run/dpdk/spdk_pid73799 00:47:49.307 Removing: /var/run/dpdk/spdk_pid73880 00:47:49.307 Removing: /var/run/dpdk/spdk_pid73989 00:47:49.307 Removing: /var/run/dpdk/spdk_pid74086 00:47:49.567 Removing: /var/run/dpdk/spdk_pid74240 00:47:49.567 Removing: /var/run/dpdk/spdk_pid74531 00:47:49.567 Removing: /var/run/dpdk/spdk_pid74574 00:47:49.567 Removing: /var/run/dpdk/spdk_pid75041 00:47:49.567 Removing: /var/run/dpdk/spdk_pid75226 00:47:49.567 Removing: /var/run/dpdk/spdk_pid75328 00:47:49.567 Removing: /var/run/dpdk/spdk_pid75438 00:47:49.567 Removing: /var/run/dpdk/spdk_pid75497 00:47:49.567 Removing: /var/run/dpdk/spdk_pid75523 00:47:49.567 Removing: /var/run/dpdk/spdk_pid76010 00:47:49.567 Removing: /var/run/dpdk/spdk_pid76081 00:47:49.567 Removing: /var/run/dpdk/spdk_pid76179 00:47:49.567 Removing: /var/run/dpdk/spdk_pid76610 00:47:49.567 Removing: /var/run/dpdk/spdk_pid76762 00:47:49.567 Removing: /var/run/dpdk/spdk_pid77568 00:47:49.567 Removing: /var/run/dpdk/spdk_pid77720 00:47:49.567 Removing: /var/run/dpdk/spdk_pid77984 00:47:49.567 Removing: /var/run/dpdk/spdk_pid78091 00:47:49.567 Removing: /var/run/dpdk/spdk_pid78412 00:47:49.567 Removing: /var/run/dpdk/spdk_pid78710 00:47:49.567 Removing: /var/run/dpdk/spdk_pid79059 00:47:49.567 Removing: /var/run/dpdk/spdk_pid79308 00:47:49.567 Removing: /var/run/dpdk/spdk_pid79428 00:47:49.567 Removing: /var/run/dpdk/spdk_pid79492 00:47:49.567 Removing: /var/run/dpdk/spdk_pid79613 00:47:49.567 Removing: /var/run/dpdk/spdk_pid79653 00:47:49.567 Removing: /var/run/dpdk/spdk_pid79719 00:47:49.567 Removing: /var/run/dpdk/spdk_pid79906 00:47:49.567 Removing: /var/run/dpdk/spdk_pid80210 00:47:49.567 Removing: /var/run/dpdk/spdk_pid80585 00:47:49.567 Removing: /var/run/dpdk/spdk_pid80950 00:47:49.567 Removing: /var/run/dpdk/spdk_pid81357 00:47:49.567 Removing: /var/run/dpdk/spdk_pid81808 00:47:49.567 Removing: /var/run/dpdk/spdk_pid81957 00:47:49.567 Removing: /var/run/dpdk/spdk_pid82043 00:47:49.567 Removing: /var/run/dpdk/spdk_pid82558 00:47:49.568 Removing: /var/run/dpdk/spdk_pid82626 00:47:49.568 Removing: /var/run/dpdk/spdk_pid83020 00:47:49.568 Removing: /var/run/dpdk/spdk_pid83371 00:47:49.568 Removing: /var/run/dpdk/spdk_pid83817 00:47:49.568 Removing: /var/run/dpdk/spdk_pid83947 00:47:49.568 Removing: /var/run/dpdk/spdk_pid84006 00:47:49.568 Removing: /var/run/dpdk/spdk_pid84074 00:47:49.568 Removing: /var/run/dpdk/spdk_pid84137 00:47:49.568 Removing: /var/run/dpdk/spdk_pid84202 00:47:49.568 Removing: /var/run/dpdk/spdk_pid84452 00:47:49.568 Removing: /var/run/dpdk/spdk_pid84533 00:47:49.568 Removing: /var/run/dpdk/spdk_pid84598 00:47:49.568 Removing: /var/run/dpdk/spdk_pid84656 00:47:49.568 Removing: /var/run/dpdk/spdk_pid84715 00:47:49.568 Removing: /var/run/dpdk/spdk_pid84779 00:47:49.568 Removing: /var/run/dpdk/spdk_pid84909 00:47:49.568 Clean 00:47:49.827 14:07:57 -- common/autotest_common.sh@1453 -- # return 0 00:47:49.827 14:07:57 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:47:49.827 14:07:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:49.827 14:07:57 -- common/autotest_common.sh@10 -- # set +x 00:47:49.827 14:07:57 -- spdk/autotest.sh@391 -- # 
timing_exit autotest 00:47:49.827 14:07:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:47:49.827 14:07:57 -- common/autotest_common.sh@10 -- # set +x 00:47:49.827 14:07:57 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:47:49.827 14:07:57 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:47:49.827 14:07:57 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:47:49.827 14:07:57 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:47:49.827 14:07:57 -- spdk/autotest.sh@398 -- # hostname 00:47:49.828 14:07:57 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:47:50.086 geninfo: WARNING: invalid characters removed from testname! 00:48:16.634 14:08:21 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:16.893 14:08:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:19.429 14:08:26 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:21.339 14:08:28 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:23.879 14:08:30 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:25.787 14:08:33 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:48:27.739 14:08:35 -- spdk/autotest.sh@408 -- # 
rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:48:27.739 14:08:35 -- spdk/autorun.sh@1 -- $ timing_finish 00:48:27.739 14:08:35 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:48:27.739 14:08:35 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:48:27.739 14:08:35 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:48:27.739 14:08:35 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:48:27.739 + [[ -n 5466 ]] 00:48:27.739 + sudo kill 5466 00:48:27.748 [Pipeline] } 00:48:27.765 [Pipeline] // timeout 00:48:27.770 [Pipeline] } 00:48:27.784 [Pipeline] // stage 00:48:27.788 [Pipeline] } 00:48:27.802 [Pipeline] // catchError 00:48:27.810 [Pipeline] stage 00:48:27.812 [Pipeline] { (Stop VM) 00:48:27.824 [Pipeline] sh 00:48:28.105 + vagrant halt 00:48:30.645 ==> default: Halting domain... 00:48:38.782 [Pipeline] sh 00:48:39.065 + vagrant destroy -f 00:48:41.602 ==> default: Removing domain... 00:48:41.875 [Pipeline] sh 00:48:42.160 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:48:42.170 [Pipeline] } 00:48:42.187 [Pipeline] // stage 00:48:42.193 [Pipeline] } 00:48:42.209 [Pipeline] // dir 00:48:42.215 [Pipeline] } 00:48:42.231 [Pipeline] // wrap 00:48:42.239 [Pipeline] } 00:48:42.253 [Pipeline] // catchError 00:48:42.263 [Pipeline] stage 00:48:42.265 [Pipeline] { (Epilogue) 00:48:42.281 [Pipeline] sh 00:48:42.569 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:48:47.866 [Pipeline] catchError 00:48:47.868 [Pipeline] { 00:48:47.881 [Pipeline] sh 00:48:48.169 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:48:48.169 Artifacts sizes are good 00:48:48.179 [Pipeline] } 00:48:48.193 [Pipeline] // catchError 00:48:48.205 [Pipeline] archiveArtifacts 00:48:48.216 Archiving artifacts 00:48:48.361 [Pipeline] cleanWs 00:48:48.373 [WS-CLEANUP] Deleting project workspace... 00:48:48.373 [WS-CLEANUP] Deferred wipeout is used... 00:48:48.380 [WS-CLEANUP] done 00:48:48.382 [Pipeline] } 00:48:48.397 [Pipeline] // stage 00:48:48.402 [Pipeline] } 00:48:48.416 [Pipeline] // node 00:48:48.422 [Pipeline] End of Pipeline 00:48:48.460 Finished: SUCCESS
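
For reference, the test_validate_checksum pass traced above (ftl/upgrade_shutdown.sh@96-105) reduces to the loop sketched below. This is a simplified reconstruction from the xtrace output, not the verbatim script: tcp_dd is the test/ftl/common.sh helper that drives spdk_dd over the NVMe/TCP initiator, iterations=2 matches the two passes logged here, and expected_md5 is a hypothetical array standing in for the checksums captured when the data was written earlier in the test.

  # Sketch reconstructed from the xtrace above; not the verbatim upgrade_shutdown.sh.
  test_validate_checksum() {
      local file=/home/vagrant/spdk_repo/spdk/test/ftl/file
      local iterations=2 skip=0 i sum
      for (( i = 0; i < iterations; i++ )); do
          echo "Validate MD5 checksum, iteration $(( i + 1 ))"
          # read the next 1024 x 1 MiB blocks from the ftln1 bdev into $file
          tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip="$skip" || return 1
          (( skip += 1024 ))
          sum=$(md5sum "$file" | cut -f1 -d' ')
          # each 1 GiB slice must match the checksum captured at write time
          [[ $sum == "${expected_md5[i]}" ]] || return 1
      done
  }

Both iterations above take this success path: the slices read back from ftln1 hash to c69469ef8eba3dbd4c3f1c8f3cfb6883 and fd7d2aa01fdfb538d4608d0ffbb190cf respectively, so the comparisons pass and cleanup proceeds.
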